Last Update 1:06 PM September 18, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Wednesday, 18. September 2024

Thales Group

Vulnerable APIs and Bot Attacks Costing Businesses up to $186 Billion Annually

Wed, 09/18/2024 - 15:00

API insecurity and automated abuse by bots is responsible for up to 11.8% of cyber events and losses globally. Bot-related security incident counts rose 88% in 2022 and 28% in 2023. Insecure APIs result in up to $12 billion more in losses than they did in 2021.
@Thales

Imperva, a Thales company, the cybersecurity leader that protects critical applications, APIs, and data, anywhere at scale, has released the “Economic Impact of API and Bot Attacks” report. The report analyzes more than 161,000 unique cybersecurity incidents and investigates the rising global costs of vulnerable or insecure APIs and automated abuse by bots, two security threats that are increasingly interconnected and prevalent. It estimates that API insecurity and bot attacks result in up to $186 billion[1] in losses for businesses around the world.

The report is based on a study conducted by the Marsh McLennan Cyber Risk Intelligence Center which found that larger organizations were statistically more likely to have a higher percentage of security incidents that involved both insecure APIs and bot attacks. Enterprises with revenues of more than $1 billion were 2-3x more likely to experience automated API abuse by bots than small or mid-size businesses. The study suggests that large companies are particularly vulnerable to security risks associated with automated API abuse by bots because of complex and widespread API ecosystems that often contain exposed or insecure APIs.

Enterprises rely heavily on APIs to enable seamless communication between diverse applications and services. Data from Imperva Threat Research finds that the average enterprise managed 613 API endpoints in production last year. That number is growing rapidly as businesses face mounting pressure to deliver digital services with greater agility and efficiency.

Due to this increased reliance and their direct access to sensitive data, APIs have become attractive targets for bot operators. In 2023, automated threats accounted for 30% of all API attacks, according to data from Imperva Threat Research. Today, automated API abuse by bots costs organizations up to $17.9 billion in losses annually. As the number of APIs in production multiplies, cybercriminals will increasingly use automated bots to find and exploit API business logic, circumvent security measures, and exfiltrate sensitive data.

“It’s imperative that businesses across the world address the security risks posed by insecure APIs and bot attacks, or they face a substantial economic burden,” says Nanhi Singh, General Manager of Application Security at Imperva, a Thales company. “The interconnected nature of these threats necessitates that companies take a holistic approach, integrating comprehensive security strategies for both bot and API attacks.”

Some of the key trends identified in the report include:

Increased API adoption and usage is growing the attack surface: The rapid adoption of APIs, the inexperience of many API developers, and a lack of collaboration between security and development teams mean that insecure APIs now result in up to $87 billion in losses annually, a $12 billion increase from 2021.

Bots negatively impact organizations’ bottom line: The widespread availability of attack tools and generative AI models has enhanced bot evasion techniques and enabled even low-skilled attackers to launch sophisticated bot attacks. Up to $116 billion in annual losses can be attributed to automated attacks by bots.

API and bot-related security incidents are becoming more frequent: In 2022, API-related security incidents rose by 40%, and bot-related security incidents spiked by 88%. These increases were fueled by a rise in digital transactions, the expanding use of APIs, and geopolitical tensions such as the Russia-Ukraine conflict. In 2023, as digital traffic began to stabilize and the pandemic-driven surge in internet activity subsided, the frequency of these incidents moderated: API-related security incidents grew by 9%, while bot-related security incidents rose by 28%. The overall upward trend highlights the growing persistence and frequency of these threats.

Insecure APIs and bot attacks pose a significant threat to large enterprises: Companies with revenue of at least $100 billion are the most likely to suffer security incidents related to insecure APIs or bot attacks. These threats constitute up to 26% of all security incidents experienced by such businesses.

Countries around the globe are vulnerable to API and bot attacks: Brazil experienced the highest percentage of events related to insecure APIs or bot attacks, with these threats accounting for up to 32% of all observed security incidents, closely followed by France (up to 28%), Japan (up to 28%), and India (up to 26%). While the percentage of events attributed to API and bot-related security incidents was lower in the United States, 66% of all reported events related to vulnerable APIs or automated abuse by bots occurred within the country.

“Reliance on APIs will continue to grow exponentially, driving connections to generative AI applications and large language models,” adds Singh. “At the same time, generative AI will also empower cybercriminals to create sophisticated bots at an accelerated and alarming rate. As API ecosystems expand and bots become more advanced, organizations should anticipate a significant rise in the economic impact of automated API abuse by bots unless proactive measures are taken.”

Additional Information:

Download a copy of the “Economic Impact of API and Bot Attacks” report for additional insights on the business impact of API and bot-related security incidents. See how Imperva Advanced Bot Protection and API Security can protect websites, applications, and APIs from automated attacks without affecting the flow of business-critical traffic. Read the Imperva Blog for the latest product and solution news, and threat intelligence from Imperva Threat Research.

[1] The overall total does not double count events that are both API and bot related.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


Dock

$CHEQ $DOCK Token Merger Approved: An Alliance for Decentralized Identity Adoption


We are thrilled to announce that the token merger between cheqd and Dock has been officially approved by both $CHEQ and $DOCK holders. 

By harnessing the combined strengths of two industry pioneers, Dock and cheqd will accelerate the global adoption of decentralized identity and verifiable credentials, empowering individuals and organizations worldwide with secure and trusted digital identities.

Dock and cheqd will continue as independent companies serving distinct market sectors in unique ways. cheqd will continue to advance payment infrastructure and network-layer functionalities, while Dock will continue to focus on issuance, verification, and monetization of verifiable credentials for Identity Solution Providers, including KYC, background check, and biometrics companies, through their Certs platform. Read more about the alliance here.

With the approval of this token merger, $DOCK tokens will be swapped for $CHEQ tokens at a ratio of 18.5178 $DOCK to 1 $CHEQ. This is based on a 15-day historical average using the closing prices of both tokens. The migration is estimated to commence in the latter half of Q4. More details will be available soon.
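As a purely illustrative sketch of the announced ratio (not official migration tooling, and the function name is hypothetical), converting a $DOCK balance to its $CHEQ equivalent is a simple division:

```python
# Announced swap ratio: 18.5178 DOCK per 1 CHEQ.
DOCK_PER_CHEQ = 18.5178

def dock_to_cheq(dock_amount: float) -> float:
    """Convert a $DOCK balance to its $CHEQ equivalent at the announced ratio."""
    return dock_amount / DOCK_PER_CHEQ

# A holder with 10,000 DOCK would receive roughly 540.02 CHEQ.
print(round(dock_to_cheq(10_000), 2))
```

Exact amounts will depend on how the migration rounds balances; for real token accounting, fixed-point or decimal arithmetic would be preferable to floats.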

Dock’s historical and future transactions will be migrated to the cheqd blockchain, guaranteeing continuity and providing enhanced functionality for all ongoing Dock operations.

Browse our FAQ to learn more about the alliance and token merger.

Majority Approval from cheqd and Dock Communities

The governance vote resulted in 100% approval from both $CHEQ holders and $DOCK holders.

This strong backing from both communities reflects the shared belief in the potential of this merger to unlock new opportunities for all parties involved and drive the future of decentralized identity.

What Does the Merger Mean for Dock and cheqd?

The two companies—cheqd and Dock—will remain independent legal entities, with projects and roadmaps remaining largely unchanged.

One of the most significant benefits of this collaboration is the increased interoperability it will provide. Dock will transition to a blockchain that is already being utilized by key players in the digital identity sector. By aligning ourselves with a widely adopted blockchain, we are positioning our solutions within a broader, interconnected ecosystem.

As a $DOCK token holder, this merger with $CHEQ brings a host of compelling benefits that enhance both the value and utility of your tokens, such as increased token liquidity, access to enhanced resources and tokenomics that benefit holders. Read all about the holder benefits.

Additionally, Dock’s migration of network traffic to cheqd will significantly boost activity on the cheqd network, bringing approximately 300% more traffic to the mainnet and 50% to the testnet. This will accelerate network effects, driving more adoption across industries and use cases.

This collaboration is set to increase demand for $CHEQ, as more identity transactions will occur across cheqd’s infrastructure, supporting a broader ecosystem of verifiable credentials and increasing token burn. The partnership of cheqd and Dock’s established ecosystems will forge a powerful network of over 100,000 community members and hundreds of active partners.

What Happens Next?

As we move forward, cheqd and Dock will announce the commencement dates for the following key activities:

Token Migration: The migration of $DOCK tokens to $CHEQ is expected to begin in the latter half of Q4.

Porting Blockchain Transactions: Existing blockchain transactions on the Dock chain will be ported to the cheqd blockchain.

The cheqd and Dock teams will work closely with exchanges to facilitate the token migration, ensuring a seamless transition for all trading activities.

Post-migration, Dock will default to using the cheqd network, though we will still support clients who request to use an alternative chain, multiple blockchains, or ledgerless identity systems. We believe defaulting to the cheqd chain will ensure that Dock continues to operate within the most advanced and secure decentralized ID ecosystem.


A Defining Moment for the Decentralized Identity Market

By merging the $DOCK token with $CHEQ, we are unlocking unprecedented opportunities for our community, positioning you at the cutting edge of decentralized identity innovation.

The future of decentralized digital identity is bright, and with your $CHEQ tokens, you'll be part of a dynamic, growing ecosystem that is set to lead the industry. 

Dock and cheqd will shape a world where secure, verifiable credentials are the norm, and your involvement is key to making this vision a reality. The journey ahead is filled with potential, and we are thrilled to have you with us as we pave the way for the next era of digital identity.


Thales Group

Thales Australia and Underwood Innovation Labs sign an MoU to establish a collaborative Advanced Air Mobility (AAM) Centre of Excellence in Queensland, Australia

Wed, 09/18/2024 - 06:00

Thales and Underwood Innovation Labs, the inaugural Australian government-backed innovation lab, signed a Memorandum of Understanding (MOU) to establish an Advanced Air Mobility Centre of Excellence (AAM COE). Located in Queensland, Australia, the AAM COE will facilitate the growth of a scalable and collaborative UAV ecosystem in Advanced Air Mobility, create high-skilled jobs, and provide access to indoor, virtual and physical airspace for the safe design and testing of Remote Piloted Aircraft Systems (RPAS). The establishment of the AAM COE aligns with the Australian Government's priorities identified in the Aviation White Paper regarding the Advanced Air Mobility (AAM) sector.
©Thales

Thales and Underwood Innovation Labs signed a Memorandum of Understanding (MOU) to establish an AAM Centre of Excellence. The AAM COE, supported by the Mayor of Logan City, Hon Jon Raven, will operate as a membership-based open ecosystem, enabling organisations to access state-of-the-art innovation, technology and resources.

The AAM COE is located in South East Queensland, one of Australia’s fastest growing regions, with population numbers expected to reach 5.4 million by 2041. In this key economic hub, the centre of excellence will cultivate advanced technology and develop skills for Queensland’s future workforce.

The AAM COE is modelled after a successful initiative in Paris, France, known as Centre d’Excellence Drones Ile De France (CEDIF). CEDIF operates with an approved 40km Beyond Visual Line of Sight (BVLOS) airspace corridor extending from Saint-Quentin en Yvelines to Bretigny sur Orge. Supported by Thales, Eurocontrol, and Systematic, CEDIF aims to provide a comprehensive platform for incubating, validating, and industrializing all aspects of drone activities, both direct and indirect.

 

"Thales is thrilled to be the initial founding partner in establishing the forthcoming innovation ecosystem centred on a Centre of Excellence for AAM in Queensland, alongside Underwood Innovation Lab and the City of Logan. Our shared commitment to trust, innovation, and results will unite innovators in addressing everyday challenges, integrating drones and other advanced air mobility systems safely into our daily routines, and contributing to the decarbonization of the future aviation industry." - Bobby Pavlickovski, Head of Uncrewed Services, Thales Australia.

“Underwood Innovation Lab is delighted to be partnering with Thales Australia to establish and deliver this catalytic project for Queensland which will propel the Advanced Air Mobility sector in the State and ultimately Nationally. As a first in kind, local government backed innovation Lab this project aligns well with the UiLab mission to positively impact the Australian innovation ecosystem through strategic global partnerships and transformative projects such as this that will create high-value jobs, attract further investment, and ultimately improve National productivity.” - Dr Paul Mathiesen (UiLab Chief Innovation Officer).



Tuesday, 17. September 2024

FindBiometrics

Uber Turns to Passenger Verification with New Prove Tech

Prove has launched a new identity assurance tool that is already being used by Uber, and appointed a new Chief Technology Officer to boot. The company’s new “Verified Users” system […]

SC Media - Identity and Access

ServiceNow ‘knowledge base’ misconfiguration leaks sensitive data

Security pros say KBs can be easily misconfigured – data on more than 1,000 KBs exposed.



FindBiometrics

Acceptance of Azerbaijan Digital ID Expands with MNO Support

Azercell, Azerbaijan’s leading mobile operator, has expanded its acceptance of digital ID cards to all its authorized dealer stores and Azercell Exclusive offices. As of this week, customers can now […]

Safle Wallet

Safle Community Explorer Carnival: Your Epic Adventure Begins!


Ready to explore the future of Web3? The Safle Community Explorer Carnival is launching soon, bringing you an exciting series of challenges designed to unlock the full potential of Safle Wallet and Safle Lens. Each challenge takes you deeper into the Web3 universe, where you’ll explore new chains, discover groundbreaking dApps, and level up with valuable XP! 🌌

Compete to climb the leaderboard and earn from a massive rewards pool in Safle Tokens! Don’t miss your chance to be a top explorer and shape the future of Web3!

Here’s a sneak peek at the action-packed quests coming your way:

🚀 Ignite the Safle Hype: The Saflenaut Journey Begins!

Get your engines roaring because the carnival is just around the corner — and guess what? YOU are the spark to ignite the buzz! Ready to suit up and blast off into the Web3 cosmos?

Think you’ve got your GAME ON? Welcome to the Saflenaut Mission — where your Web3 universe takes off. The more you rally, the bigger the adventure!

💥 Rootstock Troop

Gear up for an explosive mission on the Rootstock chain! Navigate, explore, and interact with dApps in a whole new way as you unlock the power of Safle Wallet’s latest integration. Adventure awaits those brave enough to take the plunge.

🚀 The BEVM Rocket

Strap in for a rocket-fueled journey to the BEVM chain! This isn’t just any mission — it’s your chance to discover how Safle Wallet takes cross-chain functionality to the next level. Ready to fire up those engines?

🏔 Avalanche Explorer

Prepare to conquer the Avalanche! Scale new heights and unlock powerful rewards as you interact with dApps in Safle Wallet. Are you ready to make your mark in the Avalanche ecosystem?

🔮 Polygon zkEVM Pioneer

The future of Web3 scalability is here, and YOU can be one of the first to explore it! Enter the Polygon zkEVM frontier and uncover the cutting-edge technology Safle has seamlessly integrated. Your pioneering spirit is about to be rewarded!

🌠 Base Voyager

Ever wanted to be a true explorer of the Base chain? Now’s your chance! Mint NFTs, engage in games, and experience the magic of Web3 on an entirely new level — all from the comfort of your Safle Wallet.

👁️ The Safle Lens Explorer

Prepare to see your portfolio like never before with Safle Lens! Whether it’s detecting spam tokens or NFTs, interacting with our AI, or uncovering hidden gems, this quest will open your eyes to Safle’s most exciting features yet.

🏆 And There’s More!

Complete multiple quests, level up with multipliers, and claim your share of an airdrop worth 15k USD in USDT, Safle Tokens & RBTC! As you journey through the Carnival, the rewards will keep stacking up. The more you play, the bigger your prize!

This is no ordinary quest — it’s an epic adventure. Mark your calendars, gather your crew, and get ready to level up in the Safle universe. The Safle Community Explorer Carnival is about to go live… will you rise to the challenge?

Keep a lookout 👉🏻 Follow Safle

Join the community 👉🏻 Join Discord


FindBiometrics

FaceTec Clocks 2.6B Annual Liveness Checks, Other Milestones in 2024 Liveness Detection Security Report

FaceTec has published its 2024 “Liveness Detection Security Report”, detailing a number of achievements going back to the start of last year. The company tripled its Spoof Bounty Program to […]

ID Tech Digest – September 17, 2024

Welcome to ID Tech’s digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: DoD’s Upgraded ABIS Adds Voice Biometrics […]

auth0

Auth0 Forms Is Now Generally Available!

We're excited to announce the general availability of Auth0 Forms, a powerful visual editor that empowers you to create custom, dynamic forms that integrate seamlessly with your authentication flows.

FindBiometrics

Canada’s Conservative Party Proposes ‘Trustworthy’ Age Assurance Tech to Protect Minors Online

The Conservative Party of Canada has introduced a proposed online harms bill that aims to counter the Liberal government’s current digital legislation. The proposed bill, championed by MP Michelle Rempel […]

CBP Plans ‘Northern Border’ Pilot of Biometric Tech

U.S. Customs and Border Protection (CBP) is planning a new pilot demonstration of biometric technology at a northern border port of entry by the end of the year, according to […]

France, Germany, Netherlands Won’t Meet November EES Deadline: Report

Officials from France, Germany, and the Netherlands have reportedly written to European Commissioner for Home Affairs Ylva Johansson to warn her that their countries will not be prepared for the […]

SC Media - Identity and Access

IntelBroker admits Experience Engine hack

Hackread reports that UK-based experiential marketing and promotional staffing service provider Experience Engine had its systems purportedly breached this month by IntelBroker, which has been peddling the stolen data on BreachForums.



Access Sports hack compromises over 88K

More than 88,000 individuals had their personal and health information stolen following a cyberattack against New Hampshire-based healthcare provider Access Sports Medicine & Orthopaedics in May, which has been claimed by the INC Ransom ransomware-as-a-service operation, SecurityWeek reports.



Stillwater Mining Company breach confirmed after RansomHub claims

Lone U.S. platinum and palladium mining firm Stillwater Mining Company had information from 7,258 employees confirmed to be compromised in a cyberattack in mid-June, which the RansomHub ransomware gang took responsibility for, according to The Record, a news site by cybersecurity firm Recorded Future.



Indicio

Choosing the right deployment for decentralized identity: Why Indicio offers SaaS as well as on-premise options


By Ken Ebert

As more decentralized identity and verifiable credential solutions come to market, many vendors offer only a Software-as-a-Service (SaaS) model because of its ease of use and scalability. However, when it comes to managing verifiable credentials containing personal data, businesses, and especially governments, need to carefully assess where the platforms or software they depend on are hosted. In this blog, we’ll talk about how our platform for decentralized identity, Indicio Proven, supports requirements for data locality, compliance with regional regulations, and the security of personal data.

Assessment of data locality and regulatory compliance

Data residency is a key consideration when using a SaaS solution for verifiable credentials. A SaaS model for deployment may store or process data in multiple regions globally. While vendors often offer region-specific hosting, there are still challenges to ensuring that personal data is only processed in authorized geographic locations. This issue becomes even more pressing for government agencies and sectors dealing with sensitive citizen information, where the stakes for compliance are higher.

Governments around the world are beginning to operate under strict data sovereignty laws that dictate where personal data can be processed and stored. Regulations like the General Data Protection Regulation (GDPR) in the European Union, Australia’s Privacy Act, or Canada’s PIPEDA create stringent requirements for how personal data is handled, especially when it comes to cross-border data flows.

For organizations in Europe, the eIDAS (Electronic Identification, Authentication and Trust Services) regulation is the framework shaping the future of digital identity. Compliance with eIDAS and other regional regulations requires careful attention to where and how sensitive data is processed and stored. 

For many organizations, the risks associated with using a SaaS model hosted in a foreign jurisdiction may outweigh the benefits, particularly if the service provider cannot guarantee that data will remain within the required geographical boundaries.
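To make the data-locality concern concrete, here is a minimal, purely illustrative sketch (not Indicio's API; the region names and policy are hypothetical) of the kind of residency guard such regulations imply, where processing is refused unless data stays within authorized regions:

```python
# Hypothetical data-residency guard. The allowed-region set is an example of
# a GDPR-constrained, EU-only deployment policy, not any vendor's actual product.
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def can_process(record_region: str) -> bool:
    """Return True only if the record's hosting region is authorized."""
    return record_region in ALLOWED_REGIONS

print(can_process("eu-central-1"))  # an authorized EU region
print(can_process("us-east-1"))     # a region outside the policy
```

In practice such checks are enforced at the infrastructure level (region-pinned hosting, contractual data-processing terms) rather than in application code alone, which is why the hosting model itself matters.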

On-premise deployment: The case for control

For businesses and governments that require the strictest control over data processing, an on-premise deployment offers a secure alternative. This model allows organizations to manage verifiable credential platforms and solutions within their own environment, ensuring that sensitive personal data never leaves their infrastructure. In an on-premise deployment, verifiable credentials and the underlying issuance and verification infrastructure are fully managed, controlled, and protected by the organization, minimizing the risks of external breaches or compliance failures.

On-premise deployments are particularly appealing to financial services and healthcare, where stringent data protection regulations demand maximum control over personal data. 

Indicio’s Differentiator: Offering Both SaaS and On-Premises Solutions

Despite the clear advantages of on-premise deployment for critical data applications, few vendors offer on-premise deployment as an option. This is where Indicio stands out as a solution provider, with both SaaS and on-premises deployment options for businesses and governments to meet their unique operational, privacy, and regulatory needs.

For those organizations that need the convenience and scalability of a cloud-based solution, Indicio Proven can be used as a fully-managed service. We handle the operational complexity of running the decentralized identity infrastructure, including regular maintenance, security updates, and compliance with global data protection regulations. This allows our clients to focus on their core operations while knowing that their verifiable credential solution is secure and up to date.

For organizations with stricter data-control requirements, Indicio Proven can be deployed on-premise to ensure that the personal data in verifiable credentials is never processed or stored outside their control.

The benefits of Indicio’s flexible deployment approach

By offering both SaaS and on-premises deployment options, Indicio provides organizations with the flexibility to choose the model that works best for them. Here are the key benefits of working with Indicio:

1. Tailored to Your Needs: Whether your organization prioritizes the ease and scalability of SaaS or requires the security and control of on-premises, Indicio has a solution that fits. We understand that no two organizations are the same, and our dual deployment model ensures that you don’t have to compromise on security or convenience.

2. Operational Excellence: For our SaaS customers, Indicio takes on the full responsibility of managing the infrastructure for issuing and verifying credentials. We handle maintenance, upgrades, and security patches, ensuring that your system runs smoothly and securely at all times. Our superb customer service ensures that you receive the support you need when you need it.

3. On-Premise Control: For organizations that require more control, Indicio’s on-premises option allows them to manage their Indicio Proven instance within their own environments. This deployment gives businesses and governments the ability to safeguard data, maintain compliance, and reduce risks associated with external data handling.

4. Regulatory Compliance: Whether SaaS or on-premise, Indicio’s solutions are built with compliance in mind. We ensure that our systems meet the highest standards of security and data protection, giving you confidence that your decentralized identity solution will align with regulations like eIDAS, GDPR, and other regional frameworks.

Conclusion

As decentralized identity and verifiable credentials continue to shape the future of secure online interactions, businesses and governments must carefully evaluate their deployment options. SaaS models offer scalability and ease, but for organizations with stringent data control requirements, an on-premises deployment may be the best choice.

Indicio’s unique ability to provide both SaaS and on-premises solutions sets us apart in the market. Whether you need the operational simplicity of a managed SaaS environment or the control of an on-premises deployment, Indicio offers a flexible solution tailored to your needs, ensuring the security, compliance, and reliability of your decentralized identity infrastructure.

In an evolving regulatory landscape, Indicio is here to help you navigate the complexities of decentralized identity—offering superb customer service, operational excellence, and the flexibility to choose the deployment model that works best for you.

Contact us to learn more about how Indicio can support your verifiable credential deployment needs. 

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Choosing the right deployment for decentralized identity: Why Indicio offers SaaS as well as on-premise options appeared first on Indicio.


SC Media - Identity and Access

SC Award Winners 2024 Entitle – Best Identity Management Solution

Entitle’s client growth has been impressive, underscored by partnerships with major players in finance and technology, including Man Group, Starburst, and Bloomreach.



SC Award Winners 2024 WatchGuard Technologies – Best Authentication Technology

WatchGuard AuthPoint Wins Best Authentication Technology at the 2024 SC Awards, Reinforcing the Importance of Identity Security in a Zero-Trust World.



This week in identity

E57 - Back to School 2024 Episode


Summary

In this episode of the Week in Identity podcast, Simon and David discuss the latest trends and developments in identity security, including market activity, funding rounds, and significant acquisitions. They delve into the importance of NIST guidelines, the rise of non-human identity (NHI), and the implications of recent acquisitions by MasterCard and Salesforce. The conversation highlights the evolving landscape of identity management and the critical need for organizations to adapt to new challenges in cybersecurity.


Chapters

00:00 Introduction to the Week in Identity Podcast

03:52 NIST Guidelines and Identity Assurance

06:30 Aembit Funding Rounds and Non-Human Identity

13:42 Acquisitions in Identity: IndyKite and 3Edges

20:17 MasterCard and Recorded Future

26:39 Salesforce and Own Data







KuppingerCole

Building Resilient IAM Systems: The Limits of IGA Customization


by Martin Kuppinger

Customization vs. Configuration: Let’s Clarify 

First, let’s clarify what we mean by “customization.” Customization involves writing new code—whether through traditional coding, low-code, or no-code platforms. Configuration, on the other hand, refers to adjusting settings within the system, ideally through the user interface or, if necessary, via configuration files. While low-code/no-code approaches have gained popularity, they don’t entirely mitigate the risks associated with customization, especially without proper documentation, version control, and staging environments in place. 

Why Customize IGA Solutions at All? 

The first and most important questions to ask are: Do we need customization in IGA solutions, and to what extent? These are two separate questions. Based on my experience, the amount of customization typically required is far less than many organizations assume. 

Most IAM processes, including the management of Joiner, Mover, Leaver (JML) activities, can be standardized. Yes, there are variations and organization-specific requirements, but these are often at the detail level: How many approvers are required? Should approvals be sequential or parallel? Even these specifics can often be addressed using best practices. Several vendors provide process frameworks, or you can consult experts for tailored frameworks that align with your organization’s needs. 
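The workflow variations mentioned above (how many approvers, sequential vs. parallel) ultimately reduce to a handful of configuration parameters rather than custom code. A minimal Python sketch of that idea, where every name is hypothetical and not any IGA vendor's actual configuration model:

```python
from dataclasses import dataclass

@dataclass
class ApprovalPolicy:
    """Illustrative policy for one access-request approval step
    (a hypothetical model, not a real product's schema)."""
    approvers: list           # user IDs allowed to approve
    required: int = 1         # how many approvals are needed
    sequential: bool = False  # True: approvers must act in listed order

def is_approved(policy, approvals):
    """Check whether the collected approvals satisfy the policy."""
    valid = [a for a in approvals if a in policy.approvers]
    if policy.sequential:
        # Sequential flow: the first `required` approvals must arrive
        # in exactly the configured order.
        return valid[:policy.required] == policy.approvers[:policy.required]
    # Parallel flow: any `required` distinct approvers suffice.
    return len(set(valid)) >= policy.required

policy = ApprovalPolicy(approvers=["manager", "app_owner"],
                        required=2, sequential=True)
print(is_approved(policy, ["manager", "app_owner"]))  # True
print(is_approved(policy, ["app_owner", "manager"]))  # False: wrong order
```

The point is that both questions in the paragraph above are settings to configure, not workflows to code from scratch.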

At the core, every organization needs to onboard employees, manage their access, handle job transitions, and de-provision access when necessary. These are universal requirements, and best practices can address them efficiently. Yet, many organizations still customize excessively, resulting in unnecessary complexity and cost. 

The Real Reasons for Customization 

There are several reasons organizations end up with highly customized IGA solutions: 

Legacy Processes: Many organizations are reluctant to let go of legacy processes, opting to map outdated workflows onto new systems. Worse, when organizations have multiple sites with their own “ways of doing things,” customization often spirals out of control.

Lack of Standard Frameworks: While process frameworks exist, not enough vendors offer them out-of-the-box, forcing organizations to build their own—often from scratch.

System Integrators: Cynics might argue that system integrators benefit from customization projects. However, this overlooks the downsides: dissatisfied customers, extended project timelines, and increased risk.

Does Switching Tools Solve the Problem?

Many organizations, when faced with a failing IAM (IGA) system, rush to replace the tool. While a tool change might seem like the solution, it rarely is. The problem usually lies in the approach to customization rather than in the tool itself. Even IDaaS, which inherently supports less customization, only mitigates the issue to a certain extent. 

A well-functioning IGA system doesn’t begin with the tool. It begins with clearly defined policies, processes, and organizational requirements. In projects that suffer from over-customization, the underlying issue is often the absence of well-documented processes. Without this groundwork, simply switching tools won’t help. 

Customization: When and How 

I’m not suggesting that customization is entirely unnecessary. There will always be specific needs that require customization. The key is to minimize unnecessary modifications and do it the right way when needed. 

Rethink Processes: Before diving into customization, take a step back and critically evaluate your processes. Do you really need that custom approval workflow, or is there a best practice you can adopt?

Avoid Backend Coding: A frequent source of trouble in IGA projects arises from coding directly against the backend, such as databases. If the database structure changes in a software update, the custom code breaks. Instead, work through APIs or create an abstraction layer to keep customizations stable.

Segregate Custom Code: Modern IGA solutions provide extensive API support and container-based deployments. Custom code should reside in microservices, consuming the APIs of your IGA system. This ensures that updates to the core system don’t break your custom code. Even if the API changes, the impact is isolated to the specific microservice, minimizing disruptions.

Three Steps to Successful IAM (IGA) Customization
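The "work through APIs" advice amounts to a thin adapter layer: custom microservices depend only on the adapter, never on backend tables. A hedged Python sketch, where the endpoint path and the `IGAClient` shape are assumptions for illustration, not a real product's API:

```python
import json
from urllib import request

class IGAClient:
    """Adapter around an IGA system's (assumed) REST API.
    Custom code calls this class instead of touching the database."""

    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _get(self, path):
        # All HTTP plumbing lives in one place.
        req = request.Request(self.base_url + path,
                              headers={"Authorization": f"Bearer {self.token}"})
        with request.urlopen(req) as resp:
            return json.load(resp)

    def entitlements(self, user_id):
        # Only the adapter knows the concrete endpoint (hypothetical here).
        return self._get(f"/api/v1/users/{user_id}/entitlements")
```

If an upgrade changes the endpoint or schema, only this adapter is touched; the microservices consuming `IGAClient` keep working.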

To ensure your IGA solution withstands necessary customization without failing, follow these steps: 

Define Policies and Processes First: Ensure your processes are thoroughly documented and follow best practices before even considering customization.

Minimize Unnecessary Customization: Many customizations provide little real benefit. Focus on what truly adds value to your organization.

Follow Best Practices in Coding: Build customizations on the Identity API layer of your Identity Fabric, isolate them in microservices, and ensure proper documentation and versioning.

By following these guidelines, you can deliver an IGA solution that meets your organization’s needs while avoiding the risks and costs of over-customization.


Northern Block

Why Northern Block is Joining the Global Acceptance Network


At Northern Block, we are thrilled to announce our participation as a founding member in the newly established Global Acceptance Network (GAN). This initiative is a crucial step towards solving one of the biggest challenges we face in the digital world: the lack of trust in digital interactions.

Think about how seamlessly payments work in the physical world. When you see a Visa logo at a merchant’s point of sale, you immediately know that your Visa card will be accepted. You don’t hesitate to tap your card on the terminal. Unfortunately, we don’t yet have the same level of confidence when it comes to online interactions.

Today’s digital interactions, especially those involving sensitive information like login credentials or payment details, are often fraught with spam, abuse, and fraud. We frequently find ourselves unsure if the transactions we’re engaging in are legitimate. Whether it’s receiving out-of-band communications through SMS or email from organisations claiming to need something urgent from us—often playing on our emotions to compromise our security—we face constant uncertainty.

On the other hand, organisations are striving to put their customers at the centre by creating more personalised and seamless experiences, and there’s no better way to achieve this than by obtaining data directly from the source: their customers. However, they need to trust that the data provided has integrity. Without this trust, businesses are forced to implement duplicate verification processes for all their customers, adding friction to the experience and undermining digital transformation efforts.

At Northern Block, we recognized this trust gap early on, which is why we became a founding member of the Trust over IP Foundation in 2020. Our goal wasn’t just to build better technologies but to apply the governance frameworks necessary to solve human trust problems in the digital world. While we’ve made great strides in achieving cryptographic trust, this only solves part of the problem.

Over the past few years, the Trust over IP Foundation has produced significant thought leadership and numerous deliverables, contributing greatly to the evolution of digital trust. Among these achievements, two major innovations stand out as particularly relevant to the Global Acceptance Network:

The Trust Registry Query Protocol: This allows any entity to interact with a trust registry by asking a simple question: “Does Entity X have Authorization Y, in the context of Ecosystem Governance Framework Z?”

The Governance Framework Metamodel and toolkit: These tools help capture and implement governance for ecosystems and have already been successfully deployed in initiatives such as Bhutan’s National Digital Identity Ecosystem and the Global Legal Entity Identifier Foundation (GLEIF).
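At its core, that trust-registry question is a three-part lookup. A toy Python sketch of the pattern (the actual Trust Registry Query Protocol specifies a full interface; the DIDs, authorization names, and registry contents here are invented for illustration):

```python
# (entity, authorization, governance framework) -> authorized?
REGISTRY = {
    ("did:example:issuer1", "issue:ProofOfEmployment",
     "egf:example-ecosystem"): True,
}

def query_registry(entity, authorization, framework):
    """Does Entity X have Authorization Y under Ecosystem Governance
    Framework Z? Unknown triples default to 'not authorized'."""
    return REGISTRY.get((entity, authorization, framework), False)

print(query_registry("did:example:issuer1", "issue:ProofOfEmployment",
                     "egf:example-ecosystem"))  # True
```

Defaulting unknown triples to "no" is the important design choice: trust is explicitly granted, never assumed.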

The Global Acceptance Network builds on the progress made by the Trust over IP Foundation by putting its frameworks into action. While numerous ecosystems today leverage various forms of credentialing and could benefit from sharing data or credentials with others, the real challenge lies in establishing governance standards that ensure these exchanges are trustworthy. This is where GAN comes in.

Much like Visa connects banks, merchants, and consumers within a trusted payment network, GAN’s purpose is to connect digital ecosystems. However, unlike Visa, GAN is not a centralised network and cannot operate as one. Instead, its strength lies in developing relationships with ecosystems and making specific claims about these ecosystems—claims that GAN is uniquely positioned to verify. These claims won’t be about the internal governance or authorities within an ecosystem, but rather about the ecosystem itself and its conformance to GAN’s trust criteria. Over time, as ecosystems are recognised by GAN or linked to the GAN network, the hope is that people and organisations will view these ecosystems as trusted entities, similar to how we implicitly trust the Visa network when we see its logo.

GAN’s ultimate goal is to solve human trust and governance problems by reducing the risks involved in accepting digital credentials or data from outside an organisation’s own ecosystem. This vision is closely aligned with the one we had when the Trust over IP Foundation was formed: a future with thousands of interconnected ecosystems, each with their own governance frameworks. GAN will act as a connector, ensuring that these ecosystems can interact and exchange trusted data, enabling secure, frictionless interactions—just like when we confidently tap our Visa cards at the checkout.

At Northern Block, we provide digital trust solutions that enable ecosystems to produce and manage valuable credentials. As demand for these credentials grows across ecosystems—something the Global Acceptance Network (GAN) can facilitate—the value for our customers increases. Additionally, as a provider of trust registry solutions, which support data models linked to ecosystem authorities as well as registries of registries, we aim to ensure that these registries can establish relationships with the GAN trust registry. This further enhances the value and interoperability of the ecosystems we support, driving greater trust and value.

The post Why Northern Block is Joining the Global Acceptance Network appeared first on Northern Block | Self Sovereign Identity Solution Provider.



Thales Group

Thales joins the CAC 40 ESG index


The inclusion of Thales in this index reflects the Group's accelerating progress in terms of social and environmental responsibility. Designed according to the highest international standards, the Group's CSR policy is at the heart of its strategy and perfectly in line with its corporate purpose, adopted in 2020: “Building a future we can all trust”.

“We are proud of Thales's inclusion in the CAC 40 ESG index. This is a strong endorsement by the financial community of our extra-financial performance and of our contribution to the protection of society, the planet and individuals,” says Isabelle Simon, General Secretary of Thales.

In 2023, Thales met or exceeded the 6 objectives of its CSR strategy, as defined in 2019 and then revised upwards in 2021:

52% reduction in operational CO2 emissions since 2018

100% deployment of eco-design in new product developments

20.4% women in management positions

86.8% of Group management committees include at least 3 women

100% of exposed employees trained in anti-corruption every two years

36.7% reduction in lost-time accident frequency rate since 2018

In 2025, the Group will unveil a new Horizon 2030 CSR roadmap.

For more information: Thales – Integrated Report 2023-2024 (thalesgroup.com)

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital Identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

Press release, 17 Sep 2024. Media contact: Alexandra Boucheron, Head of Media Relations, Thales.

Thales will be included in the CAC 40 ESG index as of market close on Friday, September 20, 2024. This index is designed to direct capital flows to the top 40 French companies in the CAC® Large 60 index demonstrating the best environmental, social and governance (ESG) practices.

Thales Australia’s Lithgow Arms partners with Våbenfabrikken to establish Danish small arms industrial capability

On 17 September 2024, Thales Australia and Denmark’s Våbenfabrikken announced that they are entering into a strategic cooperation to establish a new industrial capability in Denmark to produce NATO-interoperable small arms. The cooperation, and its resulting outcomes, means that military assault rifles will be produced in Denmark for the first time since the 1960s, providing an industrial capability to produce, maintain and sustain small arms in the country.

The first tranche will explore options for small arms production in Denmark, commencing with a Danish version of the Australian Combat Assault Rifle (ACAR). The ACAR is currently under development by Thales Australia and is based on a proven design in use with allied defence forces and law enforcement agencies. The cooperation between Thales and Våbenfabrikken aligns with the overall Danish defence strategy and supports Thales’s ambition to partner with local industry and develop sovereign supply chains for the benefit of its customers.

Under the terms of a Memorandum of Understanding (MoU), Thales and Våbenfabrikken will work together to establish a new industrial capability in Denmark with the aim of producing, maintaining and sustaining interoperable small arms in Denmark. The MoU was signed on 17th of September 2024 at an official signing ceremony at Våbenfabrikken’s premises in Denmark.

Våbenfabrikken is an established gunsmith and weapons training provider, with decades of cumulative experience in the industry. By partnering with Thales Australia, Våbenfabrikken’s ability to support Danish national security priorities will be enhanced through sovereign small arms production, local maintenance capabilities and future skills development.

“We are very excited to work with Thales to bring a NATO small arms production and through-life-support capacity to Denmark for the first time in nearly 60 years. This cooperation is a real opportunity for Våbenfabrikken to grow, in terms of product offerings and staff, to better respond to the priorities and needs of Danish National Security. The DACAR (Danish/Australian Combat Assault Rifle) is a powerful capability and, once in country, over time it will offer additional export opportunities for Denmark,” says Kim Wiencken, Chairman of the Board of Våbenfabrikken.

“This agreement is the culmination of mutual effort, investment and trust between Våbenfabrikken, Thales Australia and Thales Denmark. This cooperation brings opportunities for both Australia, in respect to regional manufacturing, and Denmark through the provision of small arms assembly, sustainment and maintenance. We’re looking forward to working closely with Våbenfabrikken in the coming years,” said Matt Duquemein, Director Integrated Weapons System, Thales Australia.

“There is great development in the Danish defence industry, with promising new companies, and it has always been part of Thales’s DNA to support the local defence industry in order to maximise the benefit for our customers. Bringing the Australian Combat Assault Rifle (ACAR) to Denmark is a step towards creating a sovereign small arms capability to support the Danish MoD in the future. With this important cooperation, Thales and Våbenfabrikken will be committed to strengthening the local defence industrial footprint in support of overall Danish national security and security of supply in key areas,” said Martin Soegaard, CEO of Thales Denmark.

Press release, 17 Sep 2024. Media contacts: Camille Heck, Media Relations Land & Naval Defence, Thales; Anne Sofie Hüttemeier, Communication Manager, Northern & Central Europe.

KuppingerCole

AI in Cybersecurity: Risks and Opportunities


by Alexei Balaganski

AI is often hailed as the ultimate tool for addressing cybersecurity challenges, but what happens when hype collides with reality? The meteoric rise of generative AI has captured the imagination of the public. From writing essays to producing art, AI can seemingly do anything. But can it really tackle the complex issues of cybersecurity effectively?

Let’s start with the elephant in the room: ChatGPT is not the pinnacle of artificial intelligence that many believe it to be. In fact, what we often mistake for the GenAI model’s competence is just its astonishing ability to instantly generate a response that sounds coherent and plausible, courtesy of billions of digital monkeys with typewriters.

Unfortunately, what these monkeys are still lacking is the honesty to admit that they don’t know something. Instead, they will happily generate pages of plausible-sounding nonsense (in the industry, this is politely referred to as “hallucinations”). To quote an article I read recently: “For decades, we were promised artificial intelligence. What we got instead is artificial mediocrity.”

Beyond the Hype: The Limits of Large Language Models in Cybersecurity

While ChatGPT may seem like an all-powerful assistant, it is not designed for or particularly good at many of the tasks necessary in cybersecurity. Large language models can write code, analyze texts, and even assist in decision-making, but their potential applications in a high-stakes field like cybersecurity must be approached with careful consideration.

Generative AI thrives on massive datasets. But in cybersecurity, those datasets often contain sensitive, confidential information that you would rather not share with an external model housed in a cloud data center. Add to that the huge computational overhead that these models require, and we are left with an unsustainable approach in the long term. Imagine the environmental costs: running LLMs with cutting-edge encryption, like fully homomorphic encryption, would take us closer to a climate catastrophe than Bitcoin mining ever did.

So, does this mean AI has no role in cybersecurity? Absolutely not. But we need to distinguish between what is hype and what is practical, scalable, and trustworthy.

Practical AI Use Cases in Cybersecurity: What Really Works

Long before ChatGPT was even a concept, machine learning (ML) techniques were already a staple in cybersecurity tools. From anomaly detection to behavioral analytics, AI-driven methods have long been applied to analyze large datasets and identify outliers that might signify a security breach.

The technology behind detecting anomalies, for instance, has been around for decades, well before the GenAI boom. It’s based on statistical methods that have been refined over the years. But here’s where things get tricky - detecting an anomaly is one thing, but determining whether that anomaly poses a real threat is quite another. With traditional methods, you may end up with a flood of anomalies, but with no real insight into which of them demand immediate action.

The most advanced AI/ML tools today do more than just identify anomalies. They correlate them with known attack vectors, connect them to a specific threat framework like MITRE ATT&CK®, and even provide detailed threat artifacts that can be used for further analysis. The real challenge is not detection but correlation: figuring out, for example, which vulnerabilities are actually exploitable in your specific environment. All of this makes for a robust threat detection mechanism, but none of it requires the power of generative AI.
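To underline how far plain statistics go here, a textbook robust-outlier test (median absolute deviation) catches exactly the kind of spike an anomaly detector looks for, with no generative AI involved. The data and threshold below are illustrative:

```python
import statistics

def mad_anomalies(values, threshold=3.5):
    """Flag points whose robust z-score, based on the median absolute
    deviation (MAD), exceeds `threshold`. Unlike mean/stdev, the MAD
    is not inflated by the outliers themselves."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all: nothing to flag
    # 0.6745 rescales MAD to be comparable with a standard deviation.
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# e.g. daily login counts for one account; the spike stands out
logins = [12, 9, 11, 10, 13, 10, 11, 240]
print(mad_anomalies(logins))  # [240]
```

As the article notes, flagging the outlier is the easy part; deciding whether it represents a real threat is where correlation work begins.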

Behavioral Analytics: The Long Game in Cybersecurity

Another area where AI/ML shines is in behavioral analytics - tracking user and system behavior over extended periods to identify potential security risks. But again, this is not the domain of ChatGPT. Traditional ML methods are more than capable of profiling behaviors, identifying deviations from the norm, and flagging potential threats based on those deviations.

The challenge in behavioral analytics is not the technology itself – it is the data. To be effective, behavioral AI tools need access to large, diverse datasets. This is why the most effective solutions come from vendors who operate massive security clouds, collecting behavioral data from a wide range of users, systems, and geographies.

What’s key to understand here is that this method requires continuous learning over time. Unlike the hype around instant results from LLMs, behavioral analytics relies on consistent, long-term data collection to provide meaningful insights.

Threat Intelligence: Where an LLM Can Truly Make a Difference

Knowing your enemy is a major factor in any kind of warfare, not just in cybersecurity. However, in cybersecurity, this struggle is especially unfair – thousands if not millions of malicious actors are out there against us, and somehow, we must collect enough intelligence about them to understand their methods, techniques, and motives.

Unsurprisingly, the Threat Intelligence industry is growing rapidly - both cybersecurity vendors and customers are in constant need of every bit of information that can give them an advantage in defending against the next cyberattack. Unfortunately, a lot of this information is highly unstructured and difficult to quantify. Entire teams of security researchers spend their days trawling the dark web for bits of intelligence about malicious actors.

Natural language processing capabilities of LLMs can dramatically increase their productivity. These AI models can directly interpret textual data like threat reports, social media, and forum posts to assess emerging risks, correlate them with data from different sources, and thus provide up-to-date insights into global cyber threats.
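Even before an LLM interprets a report, simple pattern extraction can turn raw text into structured indicators; the LLM then adds interpretation and correlation on top. A minimal sketch of that first step (the patterns are deliberately naive, and real pipelines exchange far richer formats such as STIX):

```python
import re

# Deliberately simplified indicator-of-compromise patterns.
IOC_PATTERNS = {
    "ipv4":   r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[a-z0-9-]+\.(?:com|net|org|io)\b",
}

def extract_iocs(report_text):
    """Pull indicators of compromise out of unstructured report text,
    deduplicated and sorted per category."""
    return {name: sorted(set(re.findall(pat, report_text)))
            for name, pat in IOC_PATTERNS.items()}

report = "Beacons to evil-c2.net from 203.0.113.7; payload hash " + "a" * 64
print(extract_iocs(report)["ipv4"])  # ['203.0.113.7']
```

The indicators and report text here are invented examples; the point is that structuring the text is mechanical, while assessing and correlating it is where language models can add productivity.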

Can AI Handle Automated Incident Response?

One of the most controversial promises of AI in cybersecurity is the potential for automated incident response. In theory, AI could detect a threat and neutralize it without human intervention. In practice, though, there’s a significant trust gap. Many companies remain wary of handing over control of their incident response processes to an AI, no matter how advanced. A poorly designed AI could do more harm than good: imagine it shutting down critical manufacturing systems because it misinterpreted a benign anomaly as a serious threat.

However, we are seeing a shift in attitudes. The explosion of ChatGPT’s popularity has made organizations more open to the idea of AI taking on more responsibility in their security operations. But it’s a gradual process. Many companies are opting for a phased approach, first using AI in a “dry run” mode, where it identifies threats but does not take action. Only after extensive testing do they move to a more automated setup.

But even with this cautious approach, the question remains: should we trust AI to make these decisions for us? In most cases, the answer is still no; at least, not without significant oversight from human operators.

Finding the Balance Between Technology, Risk, and Trust

AI undoubtedly has a role to play in the future of cybersecurity, but we need to keep our expectations grounded in reality. Generative AI is not the silver bullet that many make it out to be - it’s useful in specific contexts, but far from a game-changer in cybersecurity. Instead, we should focus on leveraging the right kind of AI for the right tasks.

As with any emerging technology, trust is earned, not given. In cybersecurity, where the stakes are high, it’s crucial to proceed with caution, ensuring that AI is used to complement human expertise rather than replace it. After all, AI may help us detect threats faster, but it’s human judgment that ultimately keeps our systems safe.

If you’re interested in learning more about AI applications from real human experts, you might consider attending the upcoming cyberevolution conference that will take place this December in Frankfurt, Germany. AI risks and opportunities will be one of the key topics discussed there.


Ontology

Inland Revenue’s Data Breach and Why Web3 Security Needs Decentralized Identity


The recent Inland Revenue data breach serves as a stark reminder of the fragility of centralized systems. When large organizations — whether they be governments, corporations, or tech giants — are responsible for housing vast amounts of sensitive data, a single error can have catastrophic consequences. In this case, it’s tax information. But the implications go much deeper.

We’ve seen time and again how centralized structures, a hallmark of Web2, fail to protect data adequately. Whether through technical vulnerabilities or human error, the result is the same — your personal information is left exposed. This isn’t just about tax records, passwords, or email addresses getting into the wrong hands. It’s about trust. And when that trust is broken, it takes years to rebuild, and we’ve all become painfully aware of how fragile that trust is in today’s digital age.

This is where decentralized identity (DID) comes in. DID flips the script, handing control back to individuals rather than institutions that often mismanage data. With decentralized identity systems, your personal information is no longer stored in a vulnerable central server; it’s distributed across a secure, immutable blockchain. You decide who gets access to your data and under what terms. You own it, you control it, and you can revoke access whenever you want.

Web3 security technologies like Zero Knowledge Proofs, Self-Sovereign Identity, and decentralized storage solutions enable this shift. Instead of depending on a tax department or a tech giant to safeguard your data, you control every aspect of its distribution. Inland Revenue’s mishap should be a wake-up call, a signal that centralized systems are not built for the digital age we now inhabit. The centralized Web2 world is riddled with single points of failure, and as we become more reliant on digital systems, these failures become not just likely but inevitable.

In contrast, decentralized systems are trustless by design. You don’t need to trust an organization or a government to protect your data because the system itself is built on cryptographic proofs that ensure privacy and security. It’s about data sovereignty — taking back control over the very information that defines us.
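To make the selective-disclosure idea concrete, here is a toy sketch built from salted hash commitments. Every name in this snippet is invented for illustration, and real decentralized identity systems use considerably more sophisticated zero-knowledge machinery; the commitment/disclosure shape is the same, though:

```python
import hashlib
import secrets

def commit(attributes):
    """Commit to each identity attribute with a salted hash. Only the
    commitments are published; the salts and values stay with the holder."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {
        k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
        for k, v in attributes.items()
    }
    return salts, commitments

def verify_disclosure(commitment, salt, value):
    """A verifier checks one disclosed attribute against its public
    commitment, learning nothing about the attributes that stay hidden."""
    return hashlib.sha256((salt + str(value)).encode()).hexdigest() == commitment

identity = {"name": "Alice", "tax_id": "123-45-6789", "over_18": True}
salts, public_commitments = commit(identity)

# The holder chooses to reveal only "over_18"; "tax_id" is never transmitted.
print(verify_disclosure(public_commitments["over_18"], salts["over_18"], True))  # True
```

The holder, not a central database, decides which commitment to open and for whom; a breach of the verifier exposes only what was disclosed to it.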

Inland Revenue’s slip-up highlights a deeper truth: centralized data management is outdated and dangerous. The promise of Web3 is a system where users are empowered, not at the mercy of flawed institutions. This isn’t just an evolution in technology; it’s a fundamental shift in how we interact with and protect our personal information. The time has come to embrace decentralized systems, where security, privacy, and control are no longer luxuries but basic rights.

Are we ready to leave behind the vulnerabilities of Web2? The Inland Revenue incident suggests we don’t have much of a choice.

Interested in learning more about decentralized identities? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Inland Revenue’s Data Breach and Why Web3 Security Needs Decentralized Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 16. September 2024

FindBiometrics

ID Tech Digest – September 16, 2024

Welcome to ID Tech’s digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: Rwanda Pilots Biometric SIM Card Registration […]

Russia Expands Biometric Fare Payment System Beyond Moscow Metro

Russia has begun expanding its facial recognition payment system, known as Face Pay, to subways in cities outside of Moscow, including Kazan and Nizhny Novgorod. The system allows passengers to […]

Alaska Awards Thales Contract for Next-Gen Driver’s License and ID Cards

Alaska has awarded Thales a contract to produce the state’s new generation of secure driver’s licenses and ID cards featuring polycarbonate technology. This marks the second consecutive contract between Thales […]

Thales Group

Guillermo Roselló Massa, New Managing Director of the Defence Business at Thales


Guillermo Roselló joins Thales to lead the technology multinational’s defence business in Spain.
Roselló will take over as managing director of the Spanish subsidiary, replacing José Sarnito, who steps down after more than 15 years at the head of the defence business in Spain.

Roselló has extensive experience in the aerospace sector, especially in the management of large international projects. His time in public administration and in multilateral organisations such as NATO, together with his military training, gives him a deep understanding of all stakeholder groups in the defence sector.

With this appointment, the Thales Group strengthens its commitment to Spain, where it expects to grow in defence and security, cybersecurity, and digital security in the coming years.

About Thales

The Thales Group is a global leader in advanced technologies, specialising in three business sectors: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital Identity. It develops products and solutions that help make the world safer, greener, and more inclusive.

Investing €4 billion a year in research and development, particularly in key areas such as quantum technologies, edge computing, and 6G, Thales has 81,000 employees in 68 countries. In 2023, the Group generated sales of more than €18.4 billion.

Thales has had a strong presence in Spain for more than thirty years and employs over 1,250 highly qualified professionals in the aerospace, defence, digital security, and cybersecurity fields.

Thales has 13 sites in Spain, distributed across the country. In Madrid, Thales hosts its global competence centre for Border & Travel biometrics; its space competence centre is in Tres Cantos; and in cybersecurity, its SOC (Security Operations Center) manages cyberthreats for customers in Southern Europe.

16 Sep 2024, Spain

Thales and SEL unite to safeguard UK’s energy future with landmark smart grid laboratory


Thales has joined forces with SEL, a global leader in power system protection, automation and control solutions, to protect the next generation of their technologies.

The collaboration was marked by the launch of a state-of-the-art ‘smart grid laboratory’ at Thales’ UK Cyber Resilience Lab in Ebbw Vale, South Wales, providing tailored solutions and expertise to support SEL’s cybersecurity needs. This comes as critical national infrastructure (CNI) organisations, including smart grids, are increasingly targeted by cybercriminals. 

SEL will make use of Thales’ laboratory facilities to undertake:

- Cybersecurity Training: Thales and SEL will provide cybersecurity training and workshops for electrical utilities and critical infrastructure operators, covering system hardening, vulnerability and risk assessments, network design, and intrusion detection strategies.
- Attack Simulations: The lab will exercise realistic cyberattacks and threats found in the OT environment, including process bus and time-synchronisation attacks. Using real-time data in a secure, offline environment, these simulations allow operators to assess and improve their cyber-threat preparedness.
- Research & Development: The facilities will support testing and research on secure-by-design solutions and the resilience of complex cyber-physical systems across critical infrastructure, in conjunction with academic institutions.
- Product Demos: The lab will give both Thales and SEL valuable opportunities to perform test-and-learn demonstrations across a range of technologies for electrical utilities and critical national infrastructure operators.

During the launch event, Thales ran several live demonstrations of cyberattack simulations and training exercises for a select audience, including Welsh government officials, and senior representatives from key infrastructure operators and regulators.

Tony Burton, Managing Director Cybersecurity & Trust, Thales UK said:

This is a milestone partnership in terms of protecting the future of the UK’s energy supplies and critical infrastructure. The stakes for securing smart grids are incredibly high. Their future resilience requires robust cybersecurity measures to be “designed in” to the infrastructure and comprehensively tested and exercised, and that’s where Ebbw Vale comes into play. Regularly stress-testing infrastructure and raising awareness levels is the only way to truly understand the threat and build effective mitigations. Thales then proactively monitors systems with advanced threat intelligence and detection tools, responding quickly and effectively to attacks by alerting key stakeholders and communicating vital intelligence necessary for decisive response.

Gerardo Urrea, Senior Vice President of Sales and Customer Service, SEL, said: “This modern Smart Grid Laboratory provides invaluable opportunities for critical infrastructure operators to better understand and address potential vulnerabilities. Cybersecurity has been an SEL focus from the beginning, and we are pleased to partner with Thales and to be able to serve the electric power industry, and others, in this way.”

16 Sep 2024, United Kingdom

Trinsic Podcast: Future of ID

Calvin Fabre - Envoc's Role in Pioneering Mobile Driver’s Licenses in Louisiana


In this episode, I’m joined by Calvin Fabre, President and Founder of Envoc, a company that has been at the heart of mobile driver's license (mDL) innovation in Louisiana, a state leading the nation in mDL adoption. Calvin shares the fascinating story of how his company helped bring the country’s first digital driver’s license into reality, starting with a simple idea for a “digital glove box.”

We dive into a variety of topics, including:

- The journey from bidding on payment processing systems to developing a groundbreaking mDL system for the Louisiana DMV
- How Envoc navigated the complexities of legislation and law enforcement adoption to make digital driver's licenses legal for routine traffic stops
- The importance of user feedback in expanding the LA Wallet app to include hunting licenses, concealed carry permits, and even COVID-19 vaccine cards
- The unique role LA Wallet has played in verifying identity remotely, including for disaster relief and online age verification for adult content
- Insights on the future of digital credentials, from frictionless onboarding to the growing adoption of mDLs in industries like banking and retail

Calvin’s expertise offers a deep dive into the future of identity and digital credentials, making this episode a must-listen for anyone interested in the intersection of technology, law enforcement, and secure digital identification.

You can learn more about Envoc at envoc.com.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


Caribou Digital

Breaking down power imbalances through co-creation


Written by Chelsea Horváth, Measurement & Impact Manager, and Grace Natabaalo, Research & Insights Manager, both at Caribou Digital.

Co-creation has become an increasingly important topic and practice within the research, evaluation, and development communities.

Like many others in our community of practice, at Caribou Digital, we’re reflecting on co-creation in our work. At first glance, co-creation seems simple enough — create something with others.

But when the rubber hits the road, sticky questions arise. Who needs to be involved? What information is shared and how? How much time and resources are required to co-create? How is consensus reached? Who makes the final decision? Through trial and error and learning from others in the field, we’d like to share our experience and lessons on co-creation within research.

Caribou Digital’s approach to co-creation

At Caribou Digital, we understand co-creation to be an “approach that brings people together to collectively produce a mutually valued outcome and that involves a participatory process assuming some degree of shared power and decision-making.”

At conferences and in requests for proposals, we often see that co-creation is confused with collaboration (see the table below created by the authors).

The key differences between the two can be found in the definition above: breaking down power structures and decision-making. Without time and resources dedicated to those aspects, attempts at co-creation become more like collaboration.

A table outlining the differences between consultation, collaboration, and co-creation.

Using co-creation to center young people as experts in their own digital futures

In partnership with the Mastercard Foundation, Caribou Digital researched young people’s experiences with digital technologies in Africa, selecting 20 young people from across seven countries to co-create with. They included young people whose stories are not often seen or heard, such as women, people living with disabilities, refugees, and those living in rural areas.

The research team recognized that, despite good intentions, power imbalances would exist among the young people, the Mastercard Foundation, and Caribou Digital. These would hinder important insights that could lead to more strategic and relevant recommendations.

From the outset, we created an environment to alleviate these power imbalances. The co-creation process involved treating the young people as experts whose stories shaped the report, emphasizing collaboration and flexibility. This approach was outlined in the Terms of Reference, which each young person signed at the beginning of the project. At the first video conferencing session, expectations were aligned and rules of engagement were set. The young people reviewed and provided feedback on the research coding framework, shaping the language and direction of the project. Video conferencing sessions to share experiences were made inclusive and accessible, with flexible post-session reflection assignments to accommodate all needs. During the report-writing phase, panelists reviewed drafts, edited their quotes, and provided feedback, culminating in a discussion on how best to present the final report.

In reflecting on our co-creation process, three core learnings emerged.

Lesson #1: Storytelling and reflection assignments yield richer data in a non-extractive way.

Rather than extract young people’s experiences through various data collection methods, we used storytelling and reflection assignments to co-create this research. From the beginning, Caribou Digital emphasized that the young people were the experts. Their stories were the foundation of the report; our role was to facilitate and listen. The online video conference format allowed the young people to build on one another’s experiences, feel validated, and connect in a non-extractive process. Post-session reflection assignments (for example, asking the young people to reflect on how digital technologies have impacted their choice and agency) allowed them to reflect on their own and in a convenient mode (audio message or email). Providing feedback on the research process, one young person shared, “The room was always accommodating of all of us who wanted to speak, and the moderators were tolerant of our views. I felt [at] home to speak/write from the reality of my experience.”

Lesson #2: Double the time and resources needed for co-creation.

Co-creation required more time, planning, and resources than initially thought. Every video conference session required thoughtful preparation to ensure a welcoming and inclusive environment — from the slide deck to the video captions. Reflection assignments and video recordings were analyzed carefully to ensure they accurately represented the young people’s experiences. Extra time was needed for the young people to review report drafts, edit quotes, and expand on their experiences. A safe estimate for others looking to use this co-creation approach would be to double the time and human resources needed.

Lesson #3: Accountability, transparency, and flexibility are key co-creation ingredients.

It was important for Caribou Digital to develop a trusted working relationship with the young people to keep them engaged throughout the research process. We were accountable when things weren’t working well and shared how the young people’s feedback was incorporated into the report. We were transparent with expectations for the research and when honorarium payments were delayed. We were flexible when the young people couldn’t provide feedback on time or attend a video conference session due to busy schedules. These practices kept the young people engaged throughout the research process. When asked to provide anonymous feedback on the research process, one participant shared, “[Caribou] was always in touch both in the Zoom session and WhatsApp to guide in case anything wasn’t right. […] We also had timely reminders for the meetings, and at no point was I caught offside or unaware of a meeting.”

Catalyzing research with co-creation

When done well, co-creation is an incredibly powerful practice that can elevate and amplify marginalized voices and improve the quality of research products. Our co-creation journey with these 20 young people was enriching and insightful, underscoring the value of trust and transparency.

By prioritizing youth voices and experiences, the 20 young people, Caribou Digital, and the Mastercard Foundation crafted a powerful report that reflects young people’s perspectives and experiences on digital technologies in Africa. One young person shared, “I feel like [co-creation] is a good approach because it lends to the authenticity of the report since these are our lived experiences […] It also makes the report relatable to fellow youth especially.”

Caribou Digital is committed to continuing this approach and conducting more co-created research. If you’re interested in participating in such initiatives or have ideas for collaboration, we invite you to connect with us at chelsea@cariboudigital.net.

Breaking down power imbalances through co-creation was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


SC Media - Identity and Access

Apple pursues dismissal of lawsuit against NSO Group

SecurityWeek reports that Apple has sought the dismissal of a lawsuit it filed against Israeli spyware maker NSO Group three years ago amid recent developments that could threaten the exposure of threat intelligence and other sensitive data required by such litigation.



23andMe agrees to $30M settlement for breach lawsuit

Attackers were able to compromise 23andMe over five months beginning April 2023, enabling access to 5.5 million DNA Relatives profiles and details from 1.4 million users of the Family Tree feature, said the company in a disclosure in October.



HYPR

What Is Phishing-Resistant MFA and How Does it Work?


Phishing, despite its somewhat innocuous name, remains one of the foremost security threats facing businesses today. Improved awareness by the public and controls such as multi-factor authentication (MFA) have failed to stem the tide.

The FBI Internet Crime Report puts phishing and its variants (spear phishing, smishing, vishing) as the top cybercrime for the last five years, and the advent of generative AI has only added fuel to the fire. Using ChatGPT and other tools, hackers can quickly create personalized messages, in local languages, to launch widespread, highly effective phishing campaigns.

In the last six months alone, malicious emails have increased by 341%, prompting industry experts to urge organizations of all sizes to implement phishing-resistant MFA.

So, what is phishing-resistant MFA and how does it differ from traditional MFA? In this article, find phishing-resistant definitions and use cases, and learn why it’s the safest option for organizations.

What is Phishing?

Phishing is a method of attack used by malicious actors that involves deceiving users into installing malware or revealing sensitive information such as passwords, payment card numbers, and Social Security numbers. With this information, they can take over accounts, sell the information on the dark web, steal identities, and even access the internal systems and networks of an organization.

Common phishing attacks include:

- Email phishing: Attackers send emails, typically with malicious links or attachments that steal sensitive data from users.
- Whale and spear phishing: Similar to email phishing, whale and spear phishing are more targeted and aimed at specific, typically high-profile people in the organization (e.g. CEO or other executive).
- Smishing and vishing (voice phishing): Smishing uses SMS messages while vishing uses either a mobile or landline, combining it with social engineering attacks.
- Domain phishing/impersonation: Attackers typically pretend to be well-established brands to gain users’ trust and divulge sensitive information.
- Malicious attachments: Attachments contain malware that infects systems and can trigger ransomware or other attacks that steal sensitive data.

What is Multi-Factor Authentication?

Multi-factor authentication requires at least two independent factors: knowledge, or something you know (e.g., password, PIN, security question); possession, or something you have (e.g., OTP code, device); and inherence, or something you are (e.g., fingerprint or other biometric marker).

It is different from two-factor authentication (2FA) in that 2FA requires an additional verification step beyond your username and password, but it does not require that step to come from a different authentication category, as MFA does.

Phishing-Resistant MFA Overview

Phishing-resistant authentication does not use shared secrets at any point in the login process, eliminating the attacker's ability to intercept and replay access credentials and hardening the authentication process so that it cannot be compromised by even the most sophisticated phishing attacks. Passwordless MFA based on FIDO standards is considered the gold standard for phishing-resistant authentication by the OMB and other bodies.

Phishing-resistant MFA is based on public/private key cryptography and follows the guidelines published by the OMB in its M-22-09 Federal Zero Trust Strategy memorandum and the requirements for “verifier impersonation resistance” outlined by the National Institute of Standards and Technology (NIST) in SP 800-63-3.  
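The public/private key flow can be illustrated with a toy challenge-response signature. This sketch uses a deliberately tiny Schnorr-style group purely to show why no shared secret is ever exchanged and why binding the signature to the origin defeats credential replay; real FIDO authenticators use vetted elliptic curves and hardware-protected keys, and nothing here is production-ready:

```python
import hashlib
import secrets

# Toy parameters, far too small for real use; they only make the math visible.
q = 1019               # prime order of the subgroup
p = 2 * q + 1          # 2039, a safe prime
g = 4                  # generator of the order-q subgroup

def H(*parts):
    return int(hashlib.sha256("|".join(parts).encode()).hexdigest(), 16) % q

# Registration: the authenticator keeps x; the server stores only y.
x = secrets.randbelow(q - 1) + 1   # private key, never leaves the device
y = pow(g, x, p)                   # public key, held by the server

def sign(challenge, origin):
    """The authenticator signs the server's challenge bound to the origin
    it actually sees in the browser."""
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)
    e = H(str(R), challenge, origin)
    return R, (r + e * x) % q

def verify(challenge, origin, R, s):
    """The server recomputes e for ITS OWN origin, so a signature produced
    for a look-alike phishing origin will not check out."""
    e = H(str(R), challenge, origin)
    return pow(g, s, p) == (R * pow(y, e, p)) % p

challenge = secrets.token_hex(16)   # fresh random challenge per login
R, s = sign(challenge, "https://bank.example")
print(verify(challenge, "https://bank.example", R, s))  # True
print(verify(challenge, "https://evil.example", R, s))  # wrong origin: fails
```

Because the origin and a fresh challenge are hashed into every signature, a response captured by a phishing site cannot be replayed at the legitimate server, and the server-side database holds nothing an attacker could phish.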

The Problem With Traditional MFA

There are two different problems when it comes to traditional MFA. The first is that it causes friction, both for employees who use it to access accounts and consumers who want to make their purchases quickly. 

The second problem is a security issue. Unfortunately, the most common second factor in traditional MFA is “something you have” in the form of an SMS or OTP. Like passwords, these verification methods are highly vulnerable to phishing as well as MitM (Man-in-the-Middle) attacks. In order for MFA to resist phishing, it cannot rely on the use of SMS, OTPs, or identification attempts through voice calls or interceptable push notifications.

Why Phishing-Resistant MFA is the Gold Standard

A better solution is FIDO or PKI-based passwordless authentication. These phishing-resistant MFA methods remove the vulnerabilities that undermine traditional MFA, including any use of a “something you know” factor, as these factors are the target of the majority of phishing attacks.

Phishing-resistant MFA does not use any of these weaker authentication factors. It uses a strong possession factor in the form of a private cryptographic key (embedded at the hardware level in a user-owned device) and strong user inherence factors such as touch or facial recognition. Equally important, the backend authentication process does not require or store a shared secret.

Since 2022, CISA, the Cybersecurity and Infrastructure Security Agency, has strongly recommended that all organizations implement phishing-resistant MFA based on FIDO standards. This is considered the gold standard for phishing-resistant authentication by NIST (800-63B), the FFIEC, the OMB and other cybersecurity statutes.

Phishing-resistant MFA flow

Breaking Down Phishing-Resistant Multi-Factor Authentication

Phishing-resistant multi-factor authentication defends against attackers who are looking to bypass authentication controls. This more advanced level of security involves various technologies and processes, which can be implemented in a number of ways.

Strong Authentication

A hallmark of phishing-resistant MFA is strong authentication that provides a robust defense against phishing and other targeted attacks. A somewhat broad concept, it involves using secure cryptographic protocols and two or more authenticating factors that include proof of device possession as well as user biometrics.

Passkeys

Passkeys replace passwords and secrets with cryptographic key pairs and on-device biometrics for faster, easier, and more secure sign-ins to websites and apps. Unlike passwords, passkeys are always strong and phishing-resistant. Passkeys can be either synced or device-bound. Synced passkeys are the standard passkeys offered by Apple, Microsoft, Google and others.

The private key is securely stored in a vault, such as the OS keychain or a password manager, and can be synced between devices. Device-bound passkeys, by contrast, are stored on a specific hardware device and cannot be shared with other devices.

Security Keys 

Security keys store cryptographic keys and can be either hardware or software-based. Software-based keys might be stored on and integrated into mobile devices, for example, whereas hardware keys are dedicated physical devices. Hardware keys have limitations, however: they can easily be lost or stolen, and they are challenging to recover.

Biometric Authentication

Biometric authentication focuses on biological methods of identification such as fingerprints or face recognition to verify identity for the inherence (e.g. “something you are”) authentication factor. It is often integrated into devices such as mobile phones or computers. 

Adaptive Authentication 

While not technically an element of phishing-resistant MFA, adaptive authentication enforces verification of identity based on the user’s context and risk. For example, it would have a different process based on the user’s location (e.g. home or work) and device (e.g. phone or work computer). 

The Cost of Phishing Attacks

Phishing plays a role in various types of attacks. According to the 2023 Verizon Data Breach Investigations Report, phishing accounted for 44% of social engineering breaches, with the median amount stolen from Business Email Compromise alone averaging $50,000. It’s also a key initial attack vector in credential stealing, allowing hackers to initiate fraudulent transactions, deliver malware including infostealers and ransomware and gain an authenticated foothold from which they can move laterally within the system.

The Cost of a Data Breach 2024 report by IBM estimates that the average cost of a data breach is $4.88 million, an increase of 10% from the year before. Unfortunately, the go-to mitigation to prevent phishing, namely adding traditional MFA, has proven inadequate. Traditional MFA factors are sometimes even used as part of the attack itself.

 

Most multi-factor authentication solutions feature a password as one of the verification factors. The additional authentication factor generally is a one-time password (OTP) sent by voice, SMS, or email, or a push notification via an authenticator app that the user must accept.

Today, automated phishing kits that can circumvent these methods are readily available to hackers. Cybersecurity experts claim that over 90% of all multi-factor authentication is phishable. Due to these MFA vulnerabilities and the threat posed by phishing, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Government Office of Management and Budget (OMB), as mentioned above, have specifically called for phishing-resistant MFA. 

Why Organizations Need to Prioritize Phish-Resistant Authentication

While the need for phishing-resistant MFA has been apparent for some time, and was a key driver for establishing the FIDO Alliance, the generative AI trend and ChatGPT in particular has kicked this into overdrive. Cybercriminals now have the ability to send massive numbers of highly targeted phishing attacks using dark web ChatGPT counterparts such as FraudGPT and WormGPT.

According to Slashnext’s State of Phishing 2024 Mid-Year Assessment, there has been a 4151% increase in malicious emails since the advent of ChatGPT in late 2022. 

As phishing attacks have increased, so has the incidence of account takeover (ATO),  leading to a number of potential consequences for targeted organizations, including supply chain fraud, data theft and the installation of ransomware and other malware. Attackers can also use the hijacked account of one user to escalate attacks within the organization by sending malicious emails from a trusted user.

Multi-factor authentication has proven ineffective against modern phishing campaigns, which are able to phish both the initial login credentials and the second factor. For example, a phishing message might direct the victim to a proxy website while the attacker acts as a man-in-the-middle to steal both the password and OTP code.

This is only one of many tactics cybercriminals use to compromise multi-factor authentication that uses OTPs or SMS. Others include running legitimate versions of websites on their own servers, using robocalls to convince users to hand over codes and SIM-swapping, so messages are sent to an attacker’s phone.

The skyrocketing number of phishing attacks in general, accompanied by sophisticated tactics that can circumvent common authentication checks, means that phishing-resistant MFA is no longer optional. Instead, it is the only choice to keep employees and organizations safe from the vast majority of phishing threats.

How to Choose a Phishing-Resistant MFA Solution

When considering a phishing-resistant MFA solution, you’ll want to ask about its ability to completely remove shared secrets (passwords, OTPs), its support for multiple devices (e.g. desktop and mobile), and its ability to reduce friction for the user experience.

For example, does it secure authentication for remote workers and work offline? Is it intuitive and easy for new users to learn? You’ll also want to verify how long it takes to deploy across your organization and whether it integrates with major identity providers (IdPs). Finally, you’ll want to make sure it’s FIDO Certified and achieves compliance with Zero Trust architecture and regulatory obligations.

Considerations When Implementing Multi-Factor Authentication

Implementing multi-factor authentication within your organization involves a few different factors to evaluate:

Security strength: Although MFA typically protects against brute force attacks, some types of authentication are subject to phishing attacks. To ensure the highest level of security, you’ll want to consider phishing-resistant MFA that is FIDO-compliant.

Cost: You’ll need to evaluate the costs of the solution, which include not only setup and user training but ongoing maintenance costs. Keep in mind that while some solutions might cost more, they may also deliver better security and be easier for your team to implement. Some solutions may also impact productivity at the time of deployment, so that might be a consideration.

Flexibility: Users want a number of different options available for MFA. Check that your solutions offer different methods of authentication, such as verification via a mobile application or hardware keys, to adjust to the needs of different users and environments.

Scalability: Can the solution adapt to the changing needs of your organization? Can it handle a workforce that is remote? Does it offer MFA for networks, servers, and cloud infrastructure?

Learn how to evaluate passwordless security solutions

HYPR's Phishing-Resistant MFA Solution

It’s clear that phishing-resistant MFA is critical, but what does it look like in practice? HYPR’s Passwordless MFA solution is based on the FIDO standards and provides phishing-resistant authentication from desktop through to cloud applications, no matter where your workforce is located.

HYPR leverages public key cryptography to allow for secure authentication that fully eliminates the use of shared secrets between parties. Just as importantly, the HYPR platform is easy to deploy and makes logins fast and easy for the user. Complicated sign-in processes are one of the biggest reasons that people take shortcuts or use unsafe practices that criminals exploit. 

To learn more about passwordless security and phishing-resistant MFA, read our Passwordless 101 guide.

FAQs

What is the difference between passwordless and phishing resistant MFA?
Not all passwordless MFA is phishing-resistant or indeed really passwordless. OTP codes, after all, are a form of password. A solution that uses any kind of shared secret can still be compromised by phishing, man-in-the-middle and other attacks that target credentials. Phishing-resistant MFA, on the other hand, ensures that even if users are targeted with phishing attacks, there are no credentials available to steal and their authentication remains secure.

What are the benefits of phishing resistant MFA?
Phishing-resistant MFA delivers a number of benefits to the user. First, it delivers a friendly user experience that eliminates the friction involved in the traditional MFA process. Second, it provides a higher level of security than two-factor authentication or traditional multi-factor authentication. 

Can phishing bypass 2FA?
Yes, phishing can bypass 2FA using a number of different methods such as man-in-the-middle attacks, password resets and social engineering attacks. This is because most 2FA verification methods involve one-time passwords (OTP) via email or SMS, which can be easily intercepted.

Why are passkeys phishing resistant?
Passkeys are phishing resistant as they are based on FIDO standards which were designed to resist phishing as well as some other forms of attack. They consist of cryptographic key pairs, which are registered to a specific authenticating service, ensuring that the passkey only works with the exact domain name of the service. There are no passwords or shared credentials to phish and a spoofed site cannot use them.

Editor's Note: This blog was originally published May 2022 and has been completely revamped and updated for accuracy and comprehensiveness.


KuppingerCole

Evidian Orbion IDaaS solution

by Martin Kuppinger

This KuppingerCole Executive View report examines Evidian Orbion, the next-generation IDaaS solution from Evidian. Orbion provides a comprehensive, integrated approach to Identity as a Service (IDaaS), addressing all major areas of Identity and Access Management (IAM) beyond just the workforce. This report includes a technical review of the solution Evidian Orbion.

Microsoft Entra ID Governance

by Martin Kuppinger

This KuppingerCole Executive View report looks at Microsoft Entra ID Governance, the IGA (Identity Governance & Administration) solution within the Microsoft Entra portfolio. Microsoft Entra ID Governance is delivered as IDaaS (Identity as a Service). It allows simple and fast deployment of IGA capabilities with a good set of capabilities serving the requirements of a wide range of customer use cases.

Thales Group

International ID Day: Thales stands up for a legal and trusted identity for everyone.

Press release, 16 September 2024:

On the International Identity Day (‘ID Day’), Thales reaffirms its dedication to supporting global identity initiatives and driving technological advancements that foster inclusion, security, and trust.

Around the world, more than 850 million individuals do not have a legal identity, preventing them from claiming their rights and accessing fundamental citizen services, e.g. health, education, employment.

Thales has been working for more than thirty years to make trustworthy identities a reality for everyone, employing cutting-edge and responsible technology for biometrics and digital ID solutions.

 

On September 16th, Thales celebrates the International Identity Day (ID Day), a symbolic day to highlight the United Nations' Sustainable Development Goal 16.9 to provide a legal identity for all. ID Day is dedicated to raising awareness about the importance of legal identity as a fundamental human right and a key enabler of inclusive social and economic development. Thales supports global efforts to guarantee every individual has access to a secure and trusted identity.

Thales, a leading provider of secure physical and biometrics identification, has been actively involved in projects that are in line with the UN's Sustainable Development Goal 16.9, including birth registration, by 2030. According to the 2023 estimates from the World Bank's Identification for Development (ID4D) Initiative, over 850 million people globally lack an official ID. Thanks to global mobilization, the overall situation has improved since 2020, when 1 billion individuals were missing a legal ID.

ID Day highlights the significance of having a legal identity and the positive impact it has on individuals and communities worldwide. A legal identity grants access to essential services, fosters social inclusion, and facilitates participation in the global economy. That is why Thales, recognised as the number one digital identity player by Juniper Research (in its “2024 Juniper Research Competitor Leaderboard”) stands for a future in which everyone could benefit from and confirm their trusted identity, leaving no one behind.

"ID Day serves as a crucial reminder that identity is a fundamental human right. In today's interconnected world, having a secure and trusted legal ID is essential for accessing services, exercising rights, and fostering economic development. At Thales, we are committed to driving innovation in biometric and digital identity solutions, ensuring that every individual can claim their rightful place in society. Our goal is to support with our solutions a world where identity is secure, inclusive, and universally recognized" said Youzec Kurp, VP Identity and Biometrics Solutions at Thales Group

A secure identity is more than just a document; it is a gateway to opportunities and a cornerstone of trust in the digital age. Thales’ responsible biometrics (cf. TrUE Biometrics1) and digital ID solutions are designed to meet the highest security standards, protecting individuals' personal data while facilitating seamless access to services as well as mobility.

1 TrUE Biometrics stands for Transparent, Understandable and finally Ethical. For years, Thales has developed highly secure solutions and biometrics has proved its full capacity to offer both security and convenience. While the technology serves a wide range of new needs triggered by our societies' digital transformation, Thales also supports initiatives that raise awareness of the benefits and risks of adopting biometric identification technologies. Thales is a reliable and responsible partner since it provides transparent, understandable, and ethical biometrics.

Contacts: Vanessa Viala, Digital Identity & Security Press Officer. Press release, 16 September 2024, Digital Identity and Security.

Sunday, 15. September 2024

KuppingerCole

Beyond ChatGPT: AI Use Cases for Cybersecurity

How can artificial intelligence be used in cybersecurity? Matthias and Alexei asked ChatGPT exactly this question and it came up with quite a list of use cases. They go through this list and discuss it. They explore the different forms of AI aside from generative AI, such as non-generative AI and traditional machine learning. They highlight the limitations and risks associated with large language models like GPTs and the need for more sustainable and efficient AI solutions.

The conversation covers various AI use cases in cybersecurity, including threat detection, behavioral analytics, cloud security monitoring, and automated incident response. They emphasize the importance of human involvement and decision-making in AI-driven cybersecurity solutions.

Here's ChatGPT's list of AI use cases for cybersecurity:

AI for Threat Detection: AI analyzes large datasets to identify anomalies or suspicious activities that signal potential cyber threats.

Behavioral Analytics: AI tracks user behavior to detect abnormal patterns that may indicate compromised credentials or insider threats.

Cloud Security Monitoring: AI monitors cloud infrastructure, detecting security misconfigurations and policy violations to ensure compliance.

Automated Incident Response: AI helps automate responses to cyber incidents, reducing response time and mitigating damage.

Malware Detection: AI-driven solutions recognize evolving malware signatures and flag zero-day attacks through advanced pattern recognition.

Phishing Detection: AI analyzes communication patterns, spotting phishing emails or fake websites before users fall victim.

Vulnerability Management: AI identifies system vulnerabilities, predicts which flaws are most likely to be exploited, and suggests patch prioritization.

AI-Driven Penetration Testing: AI automates and enhances pen-testing by simulating potential cyberattacks and finding weaknesses in a network.

Anomaly Detection in Network Traffic: AI inspects network traffic for unusual patterns, preventing attacks like Distributed Denial of Service (DDoS).

Cybersecurity Training Simulations: AI-powered platforms create dynamic, realistic simulations for training cybersecurity teams, preparing them for real-world scenarios.

Threat Intelligence: NLP-based AI interprets textual data like threat reports, social media, and news to assess emerging risks.

Predictive Risk Assessment: AI assesses and predicts potential future security risks by evaluating system vulnerabilities and attack likelihood.


DHIWay

Decentralized Identity: It’s Not What You Think

In an increasingly digital world, proving who we are has never been more critical or misunderstood. The conversation around decentralized identity often suggests that it will replace the systems we’ve relied on for so long, tearing down the old to make way for the new. But that’s not the reality. These identity models aren’t adversaries locked in a battle for dominance; they are complementary forces that, when combined, can create a more secure, flexible, and empowering future for us all.

Think about it: our identity isn’t just a name, an ID card, or a social media profile. It’s a complex web of credentials, reputations, and relationships rooted in something deeply personal and sovereign—the name given to us at birth. This idea of identity is naturally decentralized. Yet, in today’s digital world, we are forced to rely on borrowed identifiers—like email addresses, mobile numbers, and social media accounts—that leave us vulnerable and powerless.

What if we could reclaim that sense of sovereignty in the digital realm? Imagine having a digital identity as uniquely ours as our name—one that we fully own and control, without ever compromising our privacy or security.

To bring this vision to life, we must rethink digital identity—not as a choice between centralized or decentralized systems, but as a fusion of their strengths. When these two approaches unite, they create a powerful framework of trust that offers more security, flexibility, and empowerment than either could achieve alone.

The Nature of Identity: Rooted in Sovereignty

To understand the future of digital identity, we need to start with a simple but powerful truth: our identities are inherently sovereign. From the moment we are born, our identities begin with our names—given to or chosen for us, not issued by any central authority. These names belong to us, and only us. Over time, they become associated with a rich tapestry of experiences, accomplishments, and relationships that form our reputations.

In the physical world, we build our identities by linking credentials to our names—birth certificates from governments, diplomas from universities, and membership cards from professional organizations. Each of these credentials contributes to the reputation of our names, like threads weaving together the fabric of who we are. No single entity controls all these threads; they come from diverse sources, adding depth and nuance to our identities.

But in the digital realm, this natural decentralization begins to unravel. Online, our identities are often reduced to borrowed credentials—an email address from a tech company, a social media profile, or a phone number managed by a telecom provider. Third parties control these digital identifiers, which don’t truly belong to us. They can be revoked, altered, or exploited without our consent.

What’s more, we lack control over our data. In the current model, we are compelled to hand over vast amounts of personal information to third parties for authentication and authorization. This means our data—our actions, preferences, and relationships—ends up in centralized databases that are often opaque and vulnerable. We have little say over how this data is collected, used, shared, or sold, making us passive participants in our digital lives.

This brings us to a critical realization: our current digital identities do not reflect the sovereignty and flexibility of our real-world selves. Instead, they are fragmented and vulnerable, exposed to misuse and exploitation, and ultimately subject to the control of entities whose interests may not align with ours.

But what if our digital identities could be as sovereign and flexible as the names we were given at birth? What if we could build digital reputations similarly—by linking credentials to identities we fully own and control? This is where the concept of cryptographic identifiers—a new digital foundation—comes into play.

The Core of Digital Identity: A Key Pair as Our Digital Name

Public key cryptography, a cornerstone of digital security for decades, lays the groundwork for a digital identity we truly own and manage ourselves. It revolves around a pair of cryptographic keys: a private key known only to us and a public key, which we can share with others. This key pair becomes the digital root of trust—an anchor for our online identity that remains under our control alone.

Think of the private key as our personal signature, kept secret and secure, while the public key acts like our digital name—something we can share openly and widely. Together, they create a powerful method to authenticate who we are online, without relying on any third-party provider. Just like the names given to us at birth, our digital key pair is unique and completely within our control.

But how does a key pair build trust? Here’s where it gets interesting.  Just as our real-world name gains recognition and credibility through our experiences, accomplishments, and relationships, our digital identity earns its reputation through credentials tied to our key pair. These credentials—whether issued by a government, a university, or a professional organization—are cryptographically signed and secured.

What makes this powerful is that these credentials are verifiable at any time by anyone who needs to confirm our identity, qualifications, or achievements—without ever having to return to the original issuer. This instant, trust-based verification protects our privacy. It empowers us to build and present our digital reputation with the same confidence and autonomy we enjoy in the physical world.
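As a sketch of how such issuer-signed, independently verifiable credentials work, the toy example below uses a Schnorr signature over a deliberately tiny group. The group parameters, field names, and DID string are illustrative assumptions; production systems use standardized suites such as Ed25519 or ECDSA:

```python
import hashlib
import json
import secrets

P, Q, G = 2039, 1019, 4   # toy group; real issuers use Ed25519/ECDSA keys

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int(h.hexdigest(), 16) % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def sign(x, message):
    # Schnorr signature (Fiat-Shamir): binds the credential to the issuer's key
    k = secrets.randbelow(Q - 1) + 1
    t = pow(G, k, P)
    c = H(t, message)
    return c, (k + c * x) % Q

def verify(y, message, sig):
    c, s = sig
    # Recompute t = G^s * y^(-c); y^(-c) = y^(Q-c) because y has order Q
    t = (pow(G, s, P) * pow(y, Q - c, P)) % P
    return H(t, message) == c

issuer_sk, issuer_pk = keygen()
credential = json.dumps({"holder": "did:example:alice", "degree": "BSc"})
sig = sign(issuer_sk, credential)
# Anyone holding the issuer's public key can check the credential,
# without ever contacting the issuer again.
print(verify(issuer_pk, credential, sig))  # True
```

The verification step needs only the credential, the signature, and the issuer's public key, which is exactly what makes the trust portable.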

Building Our Digital Reputation: The Key Pair in Action

Think of our digital key pair as a blank canvas, ready to be filled with the credentials that define us. Over time, we can attach verifiable credentials to this key pair—our digital driver’s license, a degree from our university, or proof of employment from our company. Each of these credentials contributes to our digital reputation, enabling us to build trust without giving up control.

Imagine needing to prove our professional qualifications to a potential employer. Instead of submitting physical documents or scans, we present a set of digital credentials tied to our key pair. The employer can instantly verify these credentials, thanks to cryptographic proofs that confirm the appropriate authorities issued them. No lengthy checks or third-party databases are required—just immediate, secure trust.

This concept extends beyond professional credentials. Suppose we need to access an age-restricted service online. Rather than disclosing our full name, date of birth, and address, we can provide a signed cryptographic proof that simply confirms we meet the age requirement without revealing any other personal information. The service provider trusts this proof because it is tied to our key pair and backed by verifiable credentials issued by trusted entities.
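A common way to implement this kind of minimal disclosure is salted-hash selective disclosure, the mechanism behind formats such as SD-JWT. The sketch below is a simplified illustration with made-up attribute names, and it omits the issuer's signature over the digest list:

```python
import hashlib
import secrets

# Salted-hash selective disclosure: the issuer commits to attribute
# digests; the holder later reveals only the attributes they choose.
def digest(salt, name, value):
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

# Issuer: salt each attribute and (in practice) sign the digest list
attributes = {"name": "Alice", "dob": "1990-01-01", "over_18": True}
salts = {k: secrets.token_hex(8) for k in attributes}
signed_digests = sorted(digest(salts[k], k, v) for k, v in attributes.items())

# Holder: disclose only the age attribute, together with its salt
disclosure = ("over_18", True, salts["over_18"])

# Verifier: recompute the digest and confirm it is among the signed ones
name, value, salt = disclosure
assert digest(salt, name, value) in signed_digests
print("age verified without revealing name or birth date")
```

The salts prevent the verifier from guessing undisclosed attributes by brute-forcing the remaining digests.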

Anchoring Identity with Multiple Key Pairs: Flexibility and Context

The power of a decentralized digital identity doesn’t stop with a single key pair. We can have multiple key pairs for different contexts—each serving a specific purpose or representing a unique aspect of our digital selves. For example, one key pair might be used for professional credentials, while another could be designated for personal interactions or healthcare records. This flexibility allows us to maintain privacy and security across various domains, ensuring that only relevant information is shared with the appropriate parties.

The World Wide Web Consortium (W3C) Decentralized Identifier (DID) standard makes adopting this approach feasible across different systems and platforms. DIDs enable us to create and manage multiple digital identities, each anchored by its cryptographic key pair, in a way that is interoperable and recognized by various services and organizations worldwide.
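Concretely, a DID document can list several verification methods under one identifier, each anchored by its own key pair. The sketch below follows the shape of the W3C DID Core data model, with the identifier and key values as made-up placeholders:

```python
import json

# Hypothetical DID document: one decentralized identifier anchored by two
# key pairs used in different contexts (all values are placeholders).
did = "did:example:123456789abcdefghi"
did_doc = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [
        {   # key pair used for professional credentials
            "id": f"{did}#keys-1",
            "type": "Ed25519VerificationKey2020",
            "controller": did,
            "publicKeyMultibase": "z6MkExampleProfessionalKey",
        },
        {   # separate key pair reserved for healthcare interactions
            "id": f"{did}#keys-2",
            "type": "Ed25519VerificationKey2020",
            "controller": did,
            "publicKeyMultibase": "z6MkExampleHealthcareKey",
        },
    ],
    "authentication": [f"{did}#keys-1"],
}
print(json.dumps(did_doc, indent=2))
```

A relying party resolves the DID, picks the verification method referenced by a credential or authentication request, and checks the proof against that public key alone.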

Owning Our Digital Identity: A New Paradigm

We reclaim sovereignty over our online lives by anchoring our digital identity to a key pair that only we control. We decide which credentials to share, with whom, and for how long. This approach fundamentally shifts the power dynamics, allowing us to build and manage our digital reputation just as we do in the real world—by accumulating trusted credentials over time.

This doesn’t mean eliminating centralized systems; instead, it integrates them into a more flexible, user-centric model. Governments, universities, banks, and other institutions continue to issue credentials, but now they do so in a way that respects our control over our identities. This isn’t about replacing one system with another; it’s about creating a bridge that combines the best of both worlds, where centralized trust meets decentralized control.

A Future Anchored by Sovereignty and Flexibility

The promise of a truly self-sovereign digital identity is no longer a distant dream. By combining the strengths of cryptographic technology and decentralized frameworks like DIDs, we can create a new digital identity paradigm that respects our privacy, protects our data, and places control back in our hands. This isn’t about tearing down existing systems; it’s about enhancing them, building bridges, and creating a digital future where our identities are secure, trusted, and uniquely ours.

With cryptographic key pairs and the W3C DID standard as the anchors of this new approach, we move towards a future where our digital identities are as secure, private, and flexible as our real-world selves. The journey starts now, with each of us reclaiming the power to own and manage our digital selves, navigating the digital realm with confidence and autonomy.

The post Decentralized Identity: It’s Not What You Think appeared first on Dhiway.


PROPERTY TOKENIZATION – REVISITING THE WHY BEHIND DEMATERIALISATION

India’s real estate market is complex, with strict regulations on property ownership. Land disputes are a major issue, accounting for 66% of civil cases and causing significant economic drain. Poor record-keeping and outdated land titles contribute to these disputes. The document proposes using blockchain technology and Verifiable Credentials (VCs) to create a more efficient, transparent, and secure system for managing land records and resolving disputes. Real estate tokenization is emerging as a solution, allowing fractional ownership and increased liquidity. A partnership between Rooba.Finance and Dhiway aims to combine asset tokenization and blockchain technology to innovate in this space.

The overall goal is to use technology to address India’s property-related legal and economic challenges.

The Indian real estate market is a unique one, governed by countless laws, regulations, and state-level amendments which control, and in some cases prohibit, the purchase of land by non-domiciled residents. As a rough rule of thumb, foreign nationals who do not reside in India cannot have property registered in their names. PIOs and NRIs are restricted from buying agricultural, plantation, farm and other such land, though they are not prohibited from purchasing, selling or inheriting residential or commercial land, save for one caveat: some states prohibit non-domiciled individuals from purchasing land of any type.

An indicative list of central laws that govern the purchase of land follows:

Transfer of Property Act, 1882
Registration Act, 1908
Indian Stamp Act, 1899
Real Estate (Regulation and Development) Act, 2016
Benami Transactions (Prohibition) Act, 1988
Foreign Exchange Management Act (FEMA), 1999

For NRIs to purchase residential property, the following documents are necessary:

Passport and/or OCI Card
PAN Card
PoA registered for the specific transaction, if the NRI is not physically available for registration.

As regards agricultural land, all NRIs and PIOs are prohibited from purchasing it, though there is no bar on inheritance. However, in many states, even resident Indian citizens face restrictions on the purchase of land, or on the conversion of agricultural land to non-agricultural (N.A.) land by mutation.

The long and short of it is that India makes real estate hard to buy, requires stringent documentation, and maintains, for all intents and purposes, a set of federal and state-level laws to accommodate its diversity.

Despite this extensive legal system, an estimated 7.7 million people in India are affected by conflict over 2.5 million hectares of land, threatening investments worth more than Rs 14 lakh crore. Since land is central to India’s developmental trajectory, finding a solution to land conflict is a crucial policy challenge for the Indian government. Land disputes account for the largest set of cases in Indian courts: 25 percent of all cases decided by the Supreme Court involved land disputes, and surveys suggest that 66 percent of all civil cases in India are related to land or property disputes. The average pendency of a land acquisition case, from creation to resolution in the Supreme Court, is 20 years. Some reports indicate that more than two-thirds of litigation pertains to property.

Data around Supreme Court (SC) cases is alarming. Property cases that manage to reach the Apex Court at the ‘Special Leave Petition’ or ‘Leave to Appeal’ stage are a mixed bag, ranging from land acquisition to conventional title disputes. To put it into perspective, the pecuniary jurisdiction of most states’ district courts has been raised to unlimited to ensure that High Courts do not get clogged by litigation. Until 2015, litigants could approach High Courts directly to file property cases concerning properties over a certain value. Now, commercial disputes must all go to district courts first, and require mandatory mediation in order to prevent lis (a legal dispute) from being joined in the first place. Despite this, litigation remains alarmingly prevalent across all asset-value classes.

This litigious mentality has ramifications beyond the protracted pendency of cases. Individuals from lower socio-economic strata are unable to receive justice due to pendency in courts. Unable to access quality legal advice, they often spend 20 years or more litigating, generally on questions of title and devolvement of title. In principle, the Supreme Court should only deal with disputes concerning questions of law that have not been settled or that require revisiting or interpretation. The disputes with the highest incidence of percolating to the SC are land acquisition cases. As the figures above indicate, 66% of all pending court cases comprise property-related disputes, which can be bifurcated into private disputes and disputes against the state (land acquisition). Private disputes (between private parties, juristic or natural) can be further divided into those involving title (competing title interests or encroachment) and those relating to devolvement (wills).

 

Cases that are not mediated or settled result in litigation, which has two economic outcomes: litigants lose money in hefty legal fees, and the economy is detrimentally affected by assets being locked in encumbrance. Without proposing some utopian, litigation-free universe, what can technology solve in such a status quo?

By 2040, the real estate market is projected to grow to Rs. 65,000 crore (US$ 9.30 billion) from Rs. 12,000 crore (US$ 1.72 billion) in 2019, and to contribute 13% to the country’s GDP by 2025. Retail, hospitality, and commercial real estate are also growing significantly, providing the much-needed infrastructure for India’s growing needs. The problems at hand impose an economic drain on the people and a judicial strain on the court infrastructure, and result in a lack of access to justice.

The solution? Verifiable provenance through digital records. Over the last decade, concerted efforts have been made to shift towards building and deploying Digital Public Infrastructure to solve the problems pertaining to data within India. Currently, the lack of trustworthy records accounts for a significant amount of litigation, as well as the inability of government schemes to function. There are significant errors and discrepancies in the maintenance logs of land records. In a study conducted in Rajasthan, in 24 percent of cases the difference between the area on record and the area measured was more than 20 percent. To compound this, land titles are often considered presumptive, meaning that the person currently occupying the land is assumed to be its owner. The same study revealed that the state ceased maintaining records of land possession in 1972, and there is no data on land possession at the tehsil level. As a result, title records are frequently outdated; the registered owner might have died or sold the property without updating the records, making it challenging to determine current ownership.

Private disputes pertaining to joint ownership also take root in poor record-keeping. It gets particularly tricky when succession cases are instituted well into the future, sans any verifiable records. In India, devolvement follows religious or custom-based inheritance by default, unless expressly revoked by a will, thereby choosing testamentary succession (a quagmire of litigation in itself). All this has a detrimental impact on the ease of doing business rankings, specifically in respect of contract enforcement and property registration. India is currently ranked 163rd and 166th, respectively, on the abovementioned fronts. Both these factors, once again, are greatly affected by India’s persistent problem: an overwhelming number of land litigations.

In the early 90s, humanity was at the dawn of personal computing and the era of the internet. Juxtaposed to this groundbreaking advancement, India witnessed one of the largest scale financial frauds ever, the Harshad Mehta Scam. In this backdrop, the Securities and Exchange Board of India (SEBI) identified authenticity of securities as a paramount concern, and a hole to be plugged. By 1996, demat was mandated across public securities markets, ushering in an era of depositories, clearing corporations, registrar-cum-transfer agents and stock exchanges. SEBI used regulated intermediaries to ensure the safety and security of individuals participating in India’s securities markets. 

To date, some sectors of the financial markets, such as private markets, have been left largely untouched by digitisation or dematerialisation. This has resulted in information asymmetry and data silos, culminating in opaque markets, inefficiencies in transactability and a lack of trust. At this juncture, we need to look towards innovative technology solutions to improve the sourcing, sharing and verification of the data that assists the public in making financial decisions. At present, in 2024, we are witnessing increasing use cases of DLT and AI, and it seems only fitting that, as we consider the evolving avatar of the internet, we must adopt and adapt or risk being mired in legacy market inefficiencies. In recent years, real estate tokenization has emerged as an unconventional investment option with advantages for both issuers and investors. The real estate sector now makes up about 40% of the digital securities market, amounting to approximately $200 million. Real estate tokenization typically turns a property’s value into a token that can be transferred and owned digitally by storing it on a blockchain; these divisible tokens represent fractional shares of ownership in the underlying real estate. A reliable database is necessary for private markets to become more liquid. Instead of being centralised, we think that this new database will be distributed and owner-controlled.
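As a toy illustration of the fractional-ownership model described above, the sketch below models one property’s shares in plain Python. The property identifier, share count, and the dict-as-ledger are all hypothetical stand-ins for an actual on-chain token contract and its compliance checks.

```python
from dataclasses import dataclass, field

@dataclass
class PropertyToken:
    """Toy registry of fractional ownership shares in one property.

    A plain dict stands in for the on-chain ledger that a real
    tokenization platform would use.
    """
    property_id: str
    total_shares: int
    balances: dict = field(default_factory=dict)

    def issue(self, owner: str) -> None:
        # Initially the issuer holds every share.
        self.balances = {owner: self.total_shares}

    def transfer(self, sender: str, receiver: str, shares: int) -> None:
        # Reject transfers the sender cannot cover, then move shares.
        if self.balances.get(sender, 0) < shares:
            raise ValueError("insufficient shares")
        self.balances[sender] -= shares
        self.balances[receiver] = self.balances.get(receiver, 0) + shares

token = PropertyToken("IN-MH-PUNE-SURVEY-1234", total_shares=1000)
token.issue("issuer")
token.transfer("issuer", "investor_a", 25)  # a 2.5% fractional stake
```

Because shares are integers against a fixed total, fractional stakes stay conserved across transfers, which is the liquidity-enabling property the article points to.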

So, how does the Finternet Project and its contributors aim to solve this population-scale problem of verifiable data? 

The vision of the Finternet is to build a set of rails for a user-centric ecosystem that unifies various fractured and siloed ecosystems using universal principles translated through technology. In the narrow compass of real estate, availability of authenticated data relating to property will unlock the hidden financial potential of a traditionally illiquid asset, remedying a major cause of litigation in India. 

Verifiable Credentials

Finternet can revolutionise the administration and evidence process for dispute-resolution by integrating advanced digital tools and decentralised technologies. Through blockchain, it ensures that records and evidence are digitised and immutable, providing a reliable and tamper-proof source of truth. Verifiable Credentials (VCs) allow for instant authentication and verification of evidence, streamlining the process and ensuring authenticity. Real-time data access and transparency are enhanced, allowing for quicker decision-making. 
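The tamper-evident quality that makes blockchains attractive for records and evidence can be sketched with a simple hash chain, where each entry’s hash commits to the previous one. This is a minimal illustration of the idea, not any particular ledger’s format; the record fields are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a land record together with the previous entry's hash,
    so editing any past entry changes every subsequent hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    # Link the new entry to the tip of the chain (or a zero genesis hash).
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    # Recompute every link; any mismatch means history was altered.
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"survey_no": "1234", "owner": "A"})
append(chain, {"survey_no": "1234", "owner": "B", "via": "sale deed"})
assert verify(chain)
chain[0]["record"]["owner"] = "X"  # tampering with history breaks the chain
assert not verify(chain)
```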

VCs are digital certificates that can be used to prove the authenticity of information regarding an individual, organization or an asset. These credentials are stored securely and can be presented and verified in a decentralised manner, without the need for intermediaries. VCs are particularly useful in scenarios where trustworthiness is a priority, like in the case of property disputes.

In the context of property, verifiable credentials can be employed to:

- Authenticate Property Ownership: VCs can be issued by government authorities or trusted entities to certify ownership of a property. These credentials can be cryptographically verified by any party, ensuring that the ownership claim is legitimate and reducing the likelihood of fraudulent claims.
- Streamline Property Transfers: During property transfers, VCs can be used to verify the identities of the parties involved, as well as the authenticity of the property title. This can significantly reduce the time and cost associated with the transfer process, as it eliminates the need for extensive paperwork and third-party verification.
- Resolve Title Disputes: In cases where there is a dispute over property ownership, VCs can serve as tamper-proof evidence of ownership history. The use of VCs can expedite the resolution process by providing courts or arbitration bodies with a clear, verifiable record of ownership, thus reducing the duration and complexity of litigation.
- Improve Transactability: By using VCs, all parties involved in a property transaction can have access to verified and up-to-date information. This transparency helps in faster business decisions such as loans-against-property, home loans, credit decisions, etc.
- Integrate with Smart Contracts: VCs can be integrated with smart contracts to automate the execution of agreements based on verified conditions. For instance, a smart contract could automatically release payment upon the verification of a property transfer credential, ensuring that both parties fulfill their obligations.
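A minimal sketch of the issue-and-verify flow described above, assuming a hypothetical land-registry issuer. For brevity it uses an HMAC over canonical JSON as the proof; real VCs use asymmetric signatures (e.g., Ed25519) so that verifiers never hold the issuer’s secret, and follow the W3C VC data model rather than this toy structure.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret for this demo only; a production issuer
# would sign with a private key and publish the public key in its DID.
ISSUER_KEY = b"land-registry-demo-key"

def issue_credential(claims: dict) -> dict:
    """Attach a proof computed over the canonical JSON of the claims."""
    body = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Recompute the proof; any change to the claims invalidates it."""
    body = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = issue_credential({
    "type": "PropertyOwnershipCredential",
    "holder": "did:example:nri-buyer",   # hypothetical holder DID
    "survey_no": "1234",
})
assert verify_credential(vc)
vc["claims"]["holder"] = "did:example:impostor"  # tampering is detected
assert not verify_credential(vc)
```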

By leveraging VCs within the property sector, India can move towards a more efficient, transparent and secure system of managing land records and resolving disputes. This technology has the potential to reduce the burden on the judiciary, minimise economic losses due to encumbered assets, and enhance the overall ease of doing business in the country.

Conclusion

The Indian real estate market faces significant challenges due to complex regulations, widespread land disputes, and outdated record-keeping systems. These issues result in economic inefficiencies, overburdened courts, and barriers to investment and development.

However, emerging technologies offer promising solutions to these long-standing problems. The integration of blockchain technology, Verifiable Credentials, and asset tokenization has the potential to revolutionize property management and transactions in India. By creating a more transparent, secure, and efficient system for recording and verifying property ownership, these innovations could:

- Reduce the number of property-related disputes
- Streamline property transfers and reduce associated costs
- Improve access to justice by providing clear, verifiable records
- Enhance the liquidity of real estate assets through tokenization
- Attract more investment to the real estate sector

The path forward involves continued development of these technologies, their integration into existing legal and administrative frameworks, and widespread adoption by stakeholders in the real estate sector. While challenges remain, the potential benefits of this technological revolution in property management are substantial and could transform India’s real estate landscape in the coming years.

Dhiway and Rooba alliance

Rooba.Finance and Dhiway are strategically collaborating to harness their respective strengths in asset tokenization and blockchain technology, driving innovation in the financial and property sectors. Rooba.Finance, with its expertise in asset tokenization, is pioneering the creation of digital representations of real-world assets, allowing for fractional ownership and enhanced liquidity in the market. Dhiway, a leader in blockchain-based infrastructure, provides the robust, secure, and transparent technology backbone necessary to support these digital assets. By integrating Dhiway’s advanced blockchain solutions, Rooba.Finance ensures that each tokenized asset is securely documented, traceable, and compliant with regulatory standards. This partnership not only facilitates the creation of new investment opportunities but also advances the secure and efficient management of digital assets, paving the way for a more decentralized and democratized financial ecosystem.

The post PROPERTY TOKENIZATION – REVISITING THE WHY BEHIND DEMATERIALISATION appeared first on Dhiway.

Friday, 13. September 2024

Anonym

Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity 


For nearly two years, Anonyome Labs has co-maintained an open source project from Hyperledger called Aries-VCX. VCX is an important decentralized identity (DI) community project, which provides the backbone for other DI software products, such as our own Sudo Platform DI Edge Agent SDK for native mobile applications. In this article, we will explore the details of this project, Anonyome’s contributions, and what’s next for this exciting project. 

What is Aries-VCX? 

Aries-VCX is a project under the Hyperledger Aries group. This group strives to provide complete toolkits for DI solutions and digital trust, including the ability to issue, store and present verifiable credentials with maximum privacy preservation, and establish confidential, ongoing communication channels for rich interactions. VCX sits alongside other popular projects such as Aries Cloud Agent Python (ACA-Py) and Credo (formerly Aries Framework JavaScript under Hyperledger). 

While these projects pursue a similar goal, they complement each other nicely. VCX is written primarily in Rust and targets both cloud and mobile native consumers. By comparison, Credo targets cloud and mobile JavaScript consumers, and ACA-Py targets only cloud consumers. Support for native mobile consumers was an essential goal when building the technology stack for Anonyome’s Edge Agent SDK and all other Sudo Platform SDKs, because providing native SDKs gives our consumers flexibility when integrating into their mobile applications and doesn’t limit them to JavaScript or React Native based environments. 

Further, VCX differs from other Aries projects in that it has historically focused on providing lower-level building blocks for DI SDKs and applications rather than batteries-included DI frameworks for consumers to pick up. We fully appreciate the low-level components because they give us the flexibility to design Anonyome’s Edge Agent SDK with an optimised internal engine and easy-to-use APIs that are in line with our Sudo Platform standards. However, VCX’s lower-level approach also presents a higher barrier to entry for other SDKs and applications to consume. 

Brief history of VCX 

VCX has been around since 2017 and is one of the first implementations of an Aries protocol-compliant library. Evernym created the original library, which was eventually moved into the Hyperledger Indy SDK project. This was to serve as a reference implementation for integrating with the Indy SDK for the Aries protocols. In 2020, the project was moved into a dedicated Hyperledger project by Absa Group, beginning a new era of development beyond the Indy SDK. 

VCX today provides a DI toolbox with a large suite of functionality that Anonyome and others in the industry use. The toolbox includes: 

- DIDComm V1: VCX supports DID Communication V1, allowing end-to-end-encrypted messages to be encoded and decoded between DIDs.
- Aries protocols: VCX provides tools for stepping through various agent-to-agent protocols defined by Aries. The protocols implemented in VCX allow the agent to engage with other agents to establish new secure connections, issue or receive credentials, present or verify a presentation of credentials, exchange text-based messages, and more. The latest list of supported protocols is here.
- DID management: DIDs are foundational to DI, and VCX has invested time in creating a reliable and clean set of DID management tools for a range of different DID methods. This allows consumers to easily resolve, create and update DIDs involved in their DI interactions. This toolbox is designed with extensibility in mind, allowing new DID methods to be added in the future for further interoperability.

Anonyome’s journey with VCX 

In our pursuit of creating a highly optimized and secure Edge Agent SDK, we wanted to bring into our technology stack the latest cutting-edge DI and Aries libraries. However, given the history we’ve just outlined, VCX in 2022 was highly tethered to the Indy SDK—an SDK that was unfortunately heading towards deprecation at the time. As a strong believer in and adopter of VCX, we set out to join VCX and contribute a major pivot to the project: decoupling VCX from the Indy SDK. This was a major refactor that other Aries projects, such as ACA-Py, also had to work through around this time.  

The changes allowed consumers to plug in and use modern Indy SDK replacement components (Aries Askar, Indy VDR, Anoncreds-rs) instead. In practice, this means users benefit from receiving the latest features and optimizations from these libraries, as well as better interoperability (e.g., a larger range of Decentralized Identifier (DID) methods beyond Indy-based DID methods). 

Shortly after Anonyome’s contribution, in early 2023 we became a co-maintainer of the VCX project and we have worked alongside other individuals and companies such as Absa Group and Instnt. Since joining, Anonyome has contributed to a wide range of aspects in VCX, such as: 

- Kickstarting a modern foreign function interface (FFI) wrapper using Mozilla’s UniFFI, allowing the Rust library to be consumed natively from Android and iOS
- Implementing some of the latest Aries Interop Protocols (AIP2 credential issuance and presentation messages)
- Contributing to the Aries Agent Test Harness on behalf of VCX, an effort that allows VCX to be benchmarked for interoperability with other Aries agents (such as ACA-Py and Credo)
- Performing regular maintenance duties: contributing to architectural design decisions, codebase housekeeping, assisting the VCX community, and participating in regular community meetings.

What’s next for VCX? 

VCX has come a long way since its beginnings with Indy SDK: it’s advanced from an Indy reference implementation into a rich and extensible toolbox for DI operations, Aries, DIDs, DIDComm, AnonCreds, and so on. But VCX development is not slowing down, especially since the standards rapidly iterate and grow in the DI ecosystem. 

VCX is keeping its eye on what the community is asking for, and where the ecosystem is heading. A few notable items ahead include: 

- DIDComm V2: Currently VCX is using DIDComm V1 for message transport and structuring in the Aries protocols it supports, but the next iteration of the standard, DIDComm V2, is now progressively rolling out into the Aries community. VCX plans to be a part of this transition.
- VCX framework: As mentioned, VCX has historically been a lower-level “toolbox” for DI operations, which is great for flexibility but hinders broad adoption. Our co-maintainer and contributors at Instnt are now working on building a framework on top of VCX, an initiative to provide a more application-friendly interface (like ACA-Py and Credo).
- DID toolbox enhancements: Since the move away from Indy, VCX has pursued supporting a wider range of DID methods from other blockchain and non-blockchain-based ecosystems, such as did:web and the latest did:peer specification. VCX will continue growing support for DID methods, building a rich and clean toolbox for “all things DIDs”.
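The resolve, create, and update operations such a DID toolbox exposes can be sketched with a toy in-memory DID method. The “did:toy” method name, registry dict, and key strings below are hypothetical stand-ins for ledger-anchored or key-derived methods like did:web and did:peer, and the DID documents are heavily simplified.

```python
import hashlib

# In-memory stand-in for wherever a real DID method anchors documents
# (a ledger, a web server, or derivation from the key itself).
REGISTRY: dict = {}

def create_did(public_key: str) -> str:
    """Derive a DID from the key and register a minimal DID document."""
    did = "did:toy:" + hashlib.sha256(public_key.encode()).hexdigest()[:16]
    REGISTRY[did] = {"id": did, "verificationMethod": [public_key]}
    return did

def resolve(did: str) -> dict:
    """Look up the DID document for a DID."""
    return REGISTRY[did]

def update(did: str, new_key: str) -> None:
    """Rotate in an additional verification key."""
    REGISTRY[did]["verificationMethod"].append(new_key)

did = create_did("z6Mk-demo-key")
assert resolve(did)["id"] == did
update(did, "z6Mk-rotated-key")
assert len(resolve(did)["verificationMethod"]) == 2
```

An extensible toolbox in VCX’s sense would hide these three operations behind a common interface so new DID methods can be plugged in without touching callers.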

Anonyome is very excited for the future of VCX and we’re glad we were a part of the journey thus far as a co-maintainer. We’d like to give a huge thanks to the co-maintainers and contributors who have made VCX what it is today—open-source thrives most with a diverse community behind it. 

If you’d like to join the VCX efforts, or just hear more about what we’re doing, feel free to join our biweekly community meeting or reach out on Discord.

The post Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity  appeared first on Anonyome Labs.


SC Media - Identity and Access

Oktane 2024 and the Current State of Identity Security - Harish Peri - ESW #375


KuppingerCole

cidaas access management

by John Tolbert This KuppingerCole Executive View report looks at the issues and options available to IT managers and security strategists to manage identity access to complex IT infrastructures. A technical review of the cidaas access management platform is included.


Thales Group

Alaska awards Thales Driver’s License, ID Card contract with next generation security

Alaska residents will be the first in the U.S. to use translucent polycarbonate driver’s license and ID cards from Thales, a new generation of laser-engraved, polycarbonate card technology for enhanced security protection. With this second consecutive contract, the Alaska Division of Motor Vehicles (DMV) and Thales renew their partnership for up to another 10 years, with nearly 225,000 driver’s licenses and ID cards issued annually.

The Alaska DMV recently awarded Thales a contract for secure driver’s license and ID card production, including a new leading-edge card format with translucent windows for another level of data security.

This contract allows Alaska to continue providing residents with the highest level of credential security and counterfeit protection through driver’s licenses and ID cards made from 100% polycarbonate, which cannot be physically altered without visibly damaging the card.

For added security, areas of the new card will use translucent windows for clear visibility into the actual structure of the card. The use of highly secure components includes Thales Window Lock technology to imprint a “negative” secondary portrait within the card, making the photo appear as a clear portrait when held up to light. These combined unique security features, along with secure elements and design features (like custom colors and personalized patterns), ensure the Alaska physical credential is both easy to identify and extremely complicated to counterfeit.

In addition to the document security design, the Thales Cogent Multi Biometric System adds an extra layer of security to prevent identity fraud and theft: through back-end biometric data verification at each citizen enrolment, the Thales system ensures the authenticity of each citizen’s identity.

These new Alaska driver’s licenses and ID cards will become available in fall of 2024 across all Alaska DMV sites.

"Thales' proven expertise in document security and groundbreaking features align perfectly with our goal to safeguard Alaskans' identities while delivering top-notch service,” said Lauren Whiteside, Division Operations Manager for the Alaska Division of Motor Vehicles. “This renewed partnership signifies a steadfast dedication to fortifying our credentials and protecting our citizens' personal information against evolving threats."

"Thales looks forward to this next chapter of our partnership with the State of Alaska for providing sophisticated driver’s license solutions,” said Tony Lo Brutto, Vice President for Thales Identity and Biometric Solutions in North America. “We will continue leveraging our industry expertise and enabling Alaska to be at the forefront of secure identity documents.”

About Thales in the USA

In the United States, Thales has maintained significant research and development, manufacturing, and service capabilities for more than 130 years. Today, Thales has 37 locations around the U.S., employing nearly 5,000 people. Working closely with U.S. customers and local partners, Thales is able to meet the most complex requirements for every operating environment.

Contact: Vanessa Viala, Digital Identity & Security Press Officer (13 Sep 2024)

Thales partners with Dstl and defence SMEs Catalyst and DCE to create a new hybrid testing environment for crewed and uncrewed platforms


Thales in the UK, under contract from Dstl, is leading the way in crewed-uncrewed integration, developing a system-of-systems digital twin environment for experimentation on the operation of Land Robotics and Autonomous Systems (RAS). Research is ongoing for the project entitled ‘Land Digital Robotics and Autonomous Systems Integration Capability’ (L-DRIC), and the consortium of Thales, Catalyst and DCE is making great progress towards final live trials and demonstrations, which are scheduled for early 2025.

L-DRIC is a hybrid ecosystem, allowing operators and researchers to utilise a common architecture and interface to plan and experiment in both the virtual and physical domain. The platform enables the operation of virtual and physical systems through one interface, which allows for endless experimentation opportunities. The development of L-DRIC will initially be designed to enable the exploration of RAS in the beyond visual line of sight (BVLOS) reconnaissance role. It will also provide a platform for better understanding the contribution of RAS to the Army’s intent to fight by recce-strike at all levels. Ultimately, the aim of the programme is to enable early experimentation in the virtual domain, and allow for more effective multi-domain integration through the evolution and extension of existing open architectures and research in the combination of crewed and uncrewed systems. This will reduce the risks, costs and timescales associated with the introduction of new systems and concepts into the armed forces through embracing spiral development. For this project, the facility will incorporate Thales’ DigitalCrew and other AI enablers to better understand the system effectiveness of such platforms and the benefits that they provide through a reduction of cognitive burden on the operator.

Catalyst has deep expertise in electronic architectures, synthetic environment modelling, simulation and experimentation. This expertise has been used to create a synthetic environment and digital twins of all physical platforms. DCE specialise in open architectures, developing technologies for robotic and autonomous systems, and is extending its Marionette control system for command and control (C2) of the project’s uncrewed ground vehicles. Thales, Catalyst and DCE are working to develop a system designed to include various crewed and uncrewed vehicles for multi-domain experimentation and testing. 

L-DRIC consists of a physical mobile crewed platform, running a generic vehicle architecture (GVA) equipped with in service and next generation Thales optronics sensors. Through the use of mission planning software, the mobile crewed platform (or any accredited user on the network) can control and view the movement of the uncrewed, autonomous vehicles. All systems and vehicles have a digital twin in the virtual world. Thales and partners’ open architecture experience will allow L-DRIC to inform both new and existing programmes, improving efficiencies for defence procurement and, subsequently, reducing the cost and time of physical trials. Furthermore, Thales offers a platform, sensor, and software agnostic approach to integration, boosting the cost efficiencies associated with the research programme and final experimentation system. 

Robotic and Autonomous Systems are transforming warfare and are rapidly evolving. The open and modular architectures developed in this project should better enable the Army to rapidly adapt and integrate emerging RAS technologies with in-service platforms such as Ajax at the pace of relevance. This, together with the use of digital twin environments, should provide a critical enabler to reduce the risks, costs and timescales associated with the integration and spiral development of RAS capability into the force.

Guy Powell, Principal Advisor - Land Autonomy, Dstl

The world of uncrewed vehicles is rapidly evolving with many systems moving away from traditional, crewed fleets. L-DRIC enables the customer and end users to build on the vast open architecture experience of Thales and its partners, Catalyst and DCE, to extend exploration into autonomous systems and robotics. By incorporating artificial intelligence and Thales’ DigitalCrew, L-DRIC reduces the burden of key decision makers on the battlefield, leading the way in bridging the gap between crewed and uncrewed systems. Thales’ DigitalCrew will be deployed on UGVs, UAVs and command vehicles to autonomously detect and classify objects of interest. This information will be shared with the wider network to create a Common Operating Picture (COP). 

In recent times, the growing significance of autonomy and artificial intelligence has become increasingly apparent, reshaping the landscape of security strategies worldwide. This important research partnership between Dstl, Thales and multiple SMEs will advance the UK’s understanding of digital twins and open architectures and explore how crewed, optionally crewed and uncrewed systems can co-exist in complex, multi-domain architectures.

Stephen McCann, Managing Director, Thales in the UK

18 Sep 2024 | United Kingdom | Thales in the UK, under contract from Dstl, is leading the way with crewed-uncrewed integration, developing a system-of-systems digital twin environment for experimentation on the operation of Land Robotics and Autonomous Systems (RAS)…

KuppingerCole

Decentralized Identity: Potential for Breakthrough Innovation


by Martin Kuppinger

Decentralized Identity (DCI) has evolved over more than a decade and is reaching the tipping point for widespread adoption and triggering massive innovation in how businesses and governments interact with customers, consumers, employees, or citizens.

From centralized identity siloes to decentralized identity wallets

DCI, also referred to as SSI (Self-Sovereign Identity), is a concept that differs fundamentally from established models. Typically, organizations manage the identities of individuals in their own systems, creating identity siloes and forcing individuals to register with many different parties. Everyone experiences this on an almost daily basis when using the Internet. While some identities, such as those from LinkedIn, Facebook, Google, or Apple, can be reused, they are still centralized and not ubiquitous.

In contrast, DCI leaves the identity and its attributes with the individual. Based on standards, that information can be flexibly exchanged with other parties. So-called verifiable credentials (VCs) provide information such as a name, an email address, a postal address, an employer, an employment status, or any other attribute. The concept of DCI is open and does not limit what a VC can carry. This openness is essential because it enables DCI for any type of use case, especially since things, devices, and organizations can (and, over time, will) have decentralized identities of their own.

DCI builds on a triad of issuers that issue VCs, holders (commonly the individuals) that hold VCs, and verifiers that consume VCs. Individuals store their VCs in so-called wallets. Over time, the term wallet may prove misleading, because we will potentially hold far more information as VCs in a wallet than we carry as cards in our wallets today. The use cases will also become much broader.
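The issuer/holder/verifier triad can be sketched in a few lines of Python. This is a toy illustration, not a conformant implementation: the field names loosely follow the W3C VC data model, and the HMAC "proof" is a shared-secret stand-in for the public-key signatures real VCs use.

```python
import hashlib
import hmac
import json

SECRET = b"issuer-demo-key"  # stand-in for the issuer's real signing key

def sign(claims: dict) -> str:
    """Toy 'proof': HMAC over canonical JSON (real VCs use e.g. Ed25519)."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

# Issuer: creates the credential and attaches a proof.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmploymentCredential"],
    "issuer": "did:example:employer",
    "credentialSubject": {
        "id": "did:example:alice",   # the holder
        "employer": "Example Corp",
        "employmentStatus": "active",
    },
}
credential["proof"] = sign({k: v for k, v in credential.items() if k != "proof"})

# Holder: stores the credential in a wallet and presents it on request.
wallet = [credential]

# Verifier: checks the proof; with a real signature this would use the
# issuer's public key and would not require contacting the issuer.
def verify(vc: dict) -> bool:
    claims = {k: v for k, v in vc.items() if k != "proof"}
    return hmac.compare_digest(vc["proof"], sign(claims))

print(verify(wallet[0]))  # True
```

Any tampering with the credential subject invalidates the proof, which is what lets a verifier trust the claims without a round trip to the issuer.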

Decentralized identity: More than just verification, onboarding and authentication

DCI today is frequently seen as a means of having a reusable, verified identity on hand, based on human-assisted or fully automated IDV (Identity Verification) processes. This enables trusted interactions with other parties such as organizations or governmental agencies.

The VCs then provide additional data and can, for instance, simplify onboarding processes such as registering with an eCommerce site. Based on the verified identity, the secure wallet, and the ability to open that wallet, authentication processes can be simplified.

However, these aspects only scratch the surface of the potential that DCI holds. VCs can also be used for process automation and optimization. Envision onboarding externals to a project: the process can become fully automated based on the name, the employer, the employment status, and a few other attributes. Or envision applying for a loan at a bank based on other VCs, ranging from the verified identity to monthly salary statements, marital status, and proof of existing real estate. The cost of the expensive AML (Anti Money Laundering) and KYC (Know Your Customer) processes in banks would drop massively, as would the cost of approving (or rejecting) loans. Process cost optimization is a massive potential of DCI.

But there is more. Consent could be managed via VCs that allow defined parties to use certain information for a defined purpose and a limited time. People could share health data in a controlled manner as VCs. The potential is virtually infinite and allows for breakthrough innovation in the digital economy.

Breakthrough potential: Disruption in business that does not break IT

DCI can disrupt business: organizations that leverage its potential win by delivering new, innovative services while also optimizing their processes and thus costs. We expect the recent eIDAS 2.0 regulation, which among other changes requires EU member states to provide EU DI (Decentralized Identity) wallets to every citizen and to adopt this technology for eGovernment use cases, to significantly accelerate the adoption of DCI approaches. These wallets are a foundation for implementing further DCI use cases.

Fortunately, disruption in business does not equal disruption in IT. DCI adds to what exists. When a customer is registered via DCI and purchases goods, this is still reflected by records in the ERP system of the organization. When someone is onboarded, there still might be an entry in an internal directory.

Just adding DCI to the forefront of the organization will not allow leveraging the full potential, though. Consuming VCs to make decisions, from access authorizations to process automation, requires changes in the backends. In many cases, this will be an evolutionary process.

Given the immense potential of DCI, it is high time for organizations to start evaluating that potential and thinking about the innovation it can bring to their business, or to the way governments serve their citizens. This must involve everyone in the organization, not just the identity team.

As a guest of Ergon Informatik, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will talk about this topic more in depth at the it-sa Expo & Congress in Nuremberg on October 23rd.


Metadium

Dear Community,


Dear Community,

We are pleased to share the latest update on Metadium’s progress with CertiK Skynet.

In our commitment to the continuous development and trust of the Metadium project, we prioritize enhancing security and transparency. As part of this effort, Metadium has recently completed a security audit and KYC certification with CertiK Skynet.

What is CertiK Skynet?

CertiK Skynet is a platform that monitors and evaluates the security and reliability of blockchain and cryptocurrency projects in real-time. It provides services related to security audits of smart contracts and blockchain systems. Skynet focuses on continuously monitoring each project’s smart contracts and detecting potential threats.

Smart Contract Audits: CertiK rigorously reviews and analyzes the code of smart contracts to identify vulnerabilities and weaknesses that malicious actors could exploit. This process ensures that blockchain projects are secure and trustworthy.

Penetration Testing: The company conducts thorough penetration testing to simulate potential attacks, safeguarding blockchain systems from hacks and security breaches.

Security Monitoring: CertiK offers ongoing monitoring of blockchain projects to identify and address potential threats in real time.

Skynet: CertiK’s automated security and monitoring tool provides real-time insights, on-chain monitoring, and automated auditing.

Smart contracts are a core technology in cryptocurrency projects, essential to enhance project efficiency, transparency, and trustworthiness. Through this technology, projects can operate autonomously and offer users and investors a high level of security.

Key Achievements:

CertiK Security Score increased by 5.88 points.
Security Score Rank rose by 513 positions.
Obtained KYC certification badge.

Key Highlights:

CertiK Skynet Audit: Metadium has confirmed the safety of its platform’s code and systems through a thorough security audit by CertiK Skynet. Twenty-nine items were approved and improved during this audit, and the code audit score increased by 23.68 points.

KYC Certification:

Additionally, Metadium has enhanced the transparency of its platform operations through CertiK Skynet’s KYC certification process. KYC certification is a critical procedure that verifies the project team’s identity and assesses compliance with anti-money laundering (AML) regulations. CertiK’s KYC service maintains the highest standards of data protection while providing rigorous scrutiny of the project team’s personal identity and background.

CertiK’s investigators validate cryptocurrency development teams and award a “KYC Badge” to those who successfully pass the due diligence process. This badge enhances the project team’s accountability and trustworthiness while reducing and mitigating risks of fraud and abuse. Metadium has obtained this badge, demonstrating its adherence to laws and regulations.

CertiK Skynet Score:

As a result of all these processes, Metadium’s CertiK Skynet rank and score have improved. This score reflects a comprehensive evaluation of Metadium’s security, stability, and public aspects, reaffirming the project’s technical excellence and reliability to the market.

The Metadium team is committed to continuing to build an even safer and more reliable platform. The audit and certification through CertiK Skynet are just the beginning, and we will consistently strive to maintain your trust.

Thank you for your continued support.

Metadium Team


Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Dear Community, was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 12. September 2024

KuppingerCole

The Security You Need: Seamlessly Integrating PAM and IGA for Ultimate Protection

In today's rapidly evolving cybersecurity landscape, organizations face significant challenges in integrating Privileged Access Management (PAM) and Identity Governance and Administration (IGA) systems. The complexity of integration, especially with legacy systems, coupled with the need to scale for cloud environments, poses substantial hurdles for IT professionals seeking to enhance their securit

In today's rapidly evolving cybersecurity landscape, organizations face significant challenges in integrating Privileged Access Management (PAM) and Identity Governance and Administration (IGA) systems. The complexity of integration, especially with legacy systems, coupled with the need to scale for cloud environments, poses substantial hurdles for IT professionals seeking to enhance their security posture.

Modern technology offers solutions to these challenges through unified identity platforms. These platforms enable organizations to manage security from on-premises to cloud environments with modular, integrated solutions across IGA, IAM, PAM, and Active Directory Management and Security. By leveraging API-first approaches and identity correlation systems, businesses can achieve seamless integration, reduce operational risks, and support agile just-in-time scenarios.

Paul Fisher, Lead Analyst at KuppingerCole, will discuss the latest trends in PAM and IGA integration, highlighting the importance of a unified approach to identity security. He will explore the challenges organizations face in implementing these systems and offer insights into overcoming common obstacles, ensuring compliance, and maintaining robust governance in an ever-changing threat landscape.

Jason Moody, Global Product Marketing Manager, PAM, and Bruce Esposito, Global Product Marketing Manager, IGA, both from One Identity, will showcase their Unified Identity Platform. They will demonstrate how this solution addresses identity sprawl, enhances business agility, and supports both internal and external users. The speakers will also highlight One Identity's approach to integrating PAM and IGA, emphasizing its flexibility and scalability.




Finicity

Nacha’s Preferred Partner offerings evolve to include open banking and account validation


As governor of the automated clearing house (ACH) Network that moves $80 trillion in funds electronically each year, U.S. payments industry association Nacha has been moving payments forward for 50 years. In recognition of the tremendous, data-driven changes shaping the industry in just the last few years, Nacha updated the categories for its Preferred Partner Program.

Nacha selects Preferred Partners, including Mastercard, whose payments technology offerings align with Nacha’s network advancement strategy. Mastercard Open Banking services are provided by Finicity, which has been a Nacha preferred partner in all partner solutions categories — previously defined as Compliance, Risk and Fraud Prevention, and ACH Experience — since 2020.

Going forward, Mastercard will continue to provide advanced, secure and trusted payment solutions as a Nacha Preferred Partner in three key areas: Risk and Fraud Prevention, as well as new categories Account Validation and Open Banking. These solutions are integral to the future of digital payments.

The power of consumer-permissioned data

Account-to-account (A2A) consumer bill payments and transfers totaled $9 trillion in 2023 and continue to grow at a 7% compound annual rate, according to Nacha, driven by consumers’ preference for fast and convenient payment options. Failed payments and fraudulent charges can be costly and take time to resolve, so it’s critically important to protect A2A payments with insights and analytics that keep risk and cost to a minimum.

Ensuring secure and successful digital payments starts with a robust account validation process to verify critical details like account type, ownership and balance information. These solutions not only help optimize payments, reduce risk and lower costs for fintechs and merchants; they also enable the safe and seamless payment experiences that end users demand. Mastercard Open Banking for Payments solutions include:

Account Owner +: Verifies identity by analyzing risk signals, insights and scores related to personal information, device details and IP addresses.
Account Payment Details: Retrieves account and routing numbers and indicates real-time payment availability.
Balances: Gathers insights from cleared and available balances and time stamps, with a dynamic recency setting.
Payment Success Indicator: De-risks payments with predictive insights from a weighted, multifactor settlement risk score.
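The validation flow described above can be sketched as a simple pre-payment gate that combines ownership, balance, and settlement-risk checks. All names and fields below are hypothetical illustrations of the concept, not actual Mastercard Open Banking endpoints or response formats:

```python
from dataclasses import dataclass

@dataclass
class AccountCheck:
    """Hypothetical pre-payment validation result (illustrative fields only)."""
    owner_match: bool         # does the account owner match the payer's identity?
    available_balance: float  # available balance reported by the bank
    settlement_risk: float    # 0.0 (safe) .. 1.0 (likely to fail)

def authorize_a2a_payment(check: AccountCheck, amount: float,
                          max_risk: float = 0.3) -> bool:
    """Gate an A2A payment on ownership, funds, and predicted settlement risk."""
    if not check.owner_match:
        return False                      # identity mismatch: likely fraud
    if check.available_balance < amount:
        return False                      # insufficient funds: payment would fail
    return check.settlement_risk <= max_risk

ok = authorize_a2a_payment(AccountCheck(True, 500.0, 0.1), amount=120.0)
```

The point of gating before initiation is that a rejected payment here costs nothing, whereas a failed or fraudulent ACH transaction costs money and time to unwind.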

Mastercard’s advanced global network and decades of experience in risk and fraud prevention can help fintechs and merchants make smarter decisions in a fast-moving digital payments landscape. Ultimately, we strive to help our customers, partners and end users realize all the benefits of next-generation A2A payment technologies with the lowest possible risk.

To learn more about Mastercard Open Banking for Payments, click here.

The post Nacha’s Preferred Partner offerings evolve to include open banking and account validation appeared first on Finicity.


Spruce Systems

Meet the SpruceID Team: Parke Hunter

Parke, SpruceID’s marketing manager, combines marketing expertise and customer focus to help drive success.
Name: Parke Hunter
Team: Marketing
Based in: Denver, Colorado

About Parke

After getting my marketing degree from Virginia Tech (Go Hokies!), I landed my first job selling commercial insurance at GEICO—fun fact: I got to be the GEICO Gecko for a day.

I then transitioned into working in software implementation and customer success at a food service tech company. Still wanting to pursue a career in marketing while being able to continue working closely with the product development team and customers, I found my love for product marketing. I went on to work as a product marketing manager for a range of products (from data analytics software tools to Atlassian’s app development platform) for five years at Alteryx, Sisense, and Atlassian.

I started at SpruceID last year and have loved every minute of it! It's exciting to see how the company has grown throughout my time here, and I have had the opportunity to experiment and try my hand at other areas of marketing that I may not have been as familiar with before.

Parke as GEICO Gecko

Can you tell us about your role at SpruceID?

At SpruceID, my role spans managing our content funnel, social media, and customer highlights/case studies and helping support certain events such as hackathons, business development, and website updates. We are also gearing up to build out our product marketing function, which I am looking forward to.

What do you find most rewarding about your job?

What’s most rewarding about my job is that I feel that my work really impacts our company and mission. I feel driven and motivated by how our products help people.

Also, I may be biased, but our team is the best. SpruceID is made up of some of the smartest, kindest, and most fun individuals I have ever met. They are supportive, encouraging, and come together to work as a team and achieve a goal in a way I have never seen before.

What is the most important quality for someone in your role to have?

I think that the most important quality in a marketer is curiosity. 

Curiosity for understanding customers and personas, as well as the industry you're in, spotting trends in data, problem-solving, and adapting to change in case business needs shift and you have to learn new skills.

What has been the most memorable moment for you at SpruceID so far?

There have been so many it’s hard to choose! One certainly stands out, though. At our fall 2023 offsite in Dublin, I was plucked from the crowd in an Irish pub to do an Irish jig on stage in front of hundreds of locals (and the entire company, whom I had just met in person for the first time!).

The moment we launched the California mDL was also a special and memorable moment for me.

How do you define success in your role, and how do you measure it?

There are so many ways our marketing team defines and measures success, from top to bottom of funnel.

We measure everything from brand awareness to lead generation, revenue growth, content engagement metrics, customer feedback, and awards/recognition, just to name a few. In marketing, we are also constantly evaluating the competitive landscape and understanding where we fit into it. As SpruceID grows, I know we’ll track more success metrics.

I am data and metrics-driven, and I define success in my role by the impact my work has on driving measurable results. Success to me means continuously learning, improving, and contributing to SpruceID's overall growth and strategic goals.

Fun Facts

What do you enjoy doing in your free time? In my free time, you can find me road-tripping, hiking or snowshoeing as one does in Colorado, watching reality TV, studying (I am currently getting my master's degree online), and hanging out with friends! I recently started Denver’s first “Food Critics Club” with a group of friends. We set out to taste-test a certain type of food (e.g., all of the croissants or empanadas in Denver) and have a picnic to try them all and rate them. That has been a blast!

If you could be any tree, what tree would you be and why? I would be a palm tree! Calm, resilient, and adaptable. Palm trees seem relaxed, go with the flow, and thrive in the sun (like me), but they are also much tougher than they seem and can weather wind and storms.

Interested in joining our team? Check out our open roles and apply online!

Join Our Team

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


SC Media - Identity and Access

Lehigh Valley Health Network to settle breach class-action for $65M

According to the lawsuit, ALPHV/BlackCat's separate leaks of information stolen from LVHN after the organization refused to pay the demanded ransom, including nude photographs of cancer patients captured without their knowledge in some instances, constituted a violation of the Health Insurance Portability and Accountability Act.



What security teams need to know about HIPAA compliance in the cloud

The three elements of HIPAA compliance in the cloud are data discovery, encryption, and strong access control and identity management.



KuppingerCole

Nov 19, 2024: Identity Security and Management – Why IGA Alone May Not Be Enough.

Organizations are confronted with unprecedented challenges in managing and securing identities across hybrid environments due to the growing complexity of the digital landscape. While Identity Governance and Administration (IGA) solutions provide a foundation, the increasing complexity of identity ecosystems demands a more comprehensive approach to maintain visibility and control.

Ocean Protocol

DF106 Completes and DF107 Launches

Predictoor DF106 rewards available. DF107 runs Sept 12 — Sept 19, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 106 (DF106) has completed.

DF107 is live today, Sept 12. It concludes on September 19. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF107 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn, submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.
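The submit-and-stake mechanic can be illustrated with a toy settlement function: correct predictors recover their stake and split the slashed stake of incorrect predictors (plus feed revenue) pro rata by their own stake. This is a simplified sketch of the general idea, not the actual on-chain contract logic or payout formula:

```python
def settle_round(predictions, outcome, feed_revenue=0.0):
    """Toy Predictoor-style settlement.

    predictions: list of (name, predicted_up: bool, stake: float)
    outcome: True if the price actually went up this epoch
    Correct predictors recover their stake and share the slashed stake of
    incorrect predictors plus feed revenue, pro rata by their own stake.
    """
    correct = [(n, s) for n, p, s in predictions if p == outcome]
    wrong_stake = sum(s for _, p, s in predictions if p != outcome)
    pot = wrong_stake + feed_revenue
    total_correct = sum(s for _, s in correct)
    if total_correct == 0:
        return {}  # nobody was right; all stakes are slashed in this sketch
    return {n: s + pot * (s / total_correct) for n, s in correct}

payouts = settle_round(
    [("alice", True, 100.0), ("bob", False, 50.0), ("carol", True, 25.0)],
    outcome=True,
    feed_revenue=10.0,
)
# alice: 100 + 60 * (100/125) = 148.0; carol: 25 + 60 * (25/125) = 37.0
```

Staking more on an accurate prediction earns a larger share of the pot, while an inaccurate prediction loses the stake, which is what makes accuracy (not just participation) the thing being rewarded.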

4. Specific Parameters for DF107

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to distribute these rewards evenly. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF106 Completes and DF107 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

KYC (Know Your Customer) Checklist: Simplified

Achieve KYC compliance with our comprehensive checklist, including documents, best practices, and identity verification tips.

Know Your Customer (KYC) programs are a way for financial institutions to verify the identity of their clients. Not only does it help ensure compliance with government regulations, but KYC is also an important step in preventing fraud and other illegal financial activities. Without it, businesses in the financial sector could be subject to government penalties and a loss of customer trust. In this article, we’ll take a deeper look at KYC best practices and run through an easy-to-understand compliance checklist.

Wednesday, 11. September 2024

Microsoft Entra (Azure AD) Blog

Omdia’s perspective on Microsoft’s SSE solution


In July, we announced the general availability of the Microsoft Entra Suite and Microsoft’s Security Service Edge (SSE) solution, which includes Microsoft Entra Internet Access and Microsoft Entra Private Access.

Microsoft’s vision for SSE


Microsoft’s SSE solution aims to revolutionize the way organizations secure access to any cloud or on-premises applications. It unifies identity and network access through Conditional Access, the Zero Trust policy engine, helping to eliminate security loopholes and bolster your organization’s security stance against threats. Delivered from one of the largest global private networks, the solution ensures a fast and consistent hybrid work experience. With flexible deployment options across other SSE and networking solutions, you can choose to route specific traffic profiles through Microsoft’s SSE solution.


Omdia's perspective


According to Omdia, a leading research and consulting firm, Microsoft’s entry into the SASE/SSE space is poised to disrupt the market. Omdia highlights that Microsoft’s focus is on an identity-centric SASE framework, which helps consolidate technologies from different vendors by extending identity controls to your network and enhancing team collaboration. A key strength for Microsoft, according to Omdia, is its ability to introduce Microsoft Entra Internet Access and Microsoft Entra Private Access seamlessly into existing identity management conversations—a strength that could lead to broader adoption of network access services as part of the same platform.


Conclusion


As you navigate the complexities of securing network access, Microsoft’s Security Service Edge solution helps you transform your security posture and improve user experience. It simplifies collaboration between identity and network security teams by consolidating access policies across identities, endpoints, and networks, all managed in a single portal: the Microsoft Entra admin center. Microsoft’s SSE solution provides a new pathway to implement Zero Trust access controls more effectively, enabling your organization to enhance its security posture while leveraging existing Microsoft investments.


To learn more about Omdia’s perspective on Microsoft’s SSE solution, read Omdia’s report, Microsoft announces general availability of its SASE/SSE offering.


Learn more and get started 


Stay tuned for more Security Service Edge blogs. For a deeper dive into Microsoft Entra Internet access and Microsoft Entra Private Access, watch our recent Tech Accelerator product deep dives.


To get started, contact a Microsoft sales representative, begin a trial, and explore Microsoft Entra Internet Access and Microsoft Entra Private Access general availability. Share your feedback to help us make this solution even better. 


Nupur Goyal, Director, Identity and Network Access Product Marketing 


Read more on this topic

Simplify your Zero Trust strategy with the Microsoft Entra Suite and unified security operations platform, now generally available
Microsoft’s Security Service Edge products now in General Availability
Microsoft Entra Internet Access
Microsoft Entra Private Access


Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


auth0

All You Need To Know About Passkeys at Auth0!

There are so many resources out there about passkeys and each vendor has its own implementation of the standard. Let’s answer some of your frequently asked questions about passkeys at Auth0!

Indicio

Biometric digital identity travel and hospitality Prism report

The post Biometric digital identity travel and hospitality Prism report appeared first on Indicio.

SC Media - Identity and Access

National Public Data breach underscores the need for stronger digital identities

Here are five ways to strengthen digital identities.



Misconfiguration exposes MNA Healthcare data

The database misconfiguration leaked healthcare professionals' full names, birthdates, phone numbers, addresses, email addresses, work experiences, assigned jobs, communications with MNA Healthcare, hashed temporary passwords, and encrypted Social Security numbers.



Ontology

Ontology Weekly Report: September 3rd — 9th, 2024


At Ontology, we’re continuing to engage closely with our community, ensuring consistent communication and collaboration. Here’s what’s been happening:

Community Call and Privacy Hour
Our regular Community Call and Privacy Hour took place as planned, fostering open conversations on decentralized identity and privacy. If you missed it, catch up with the recording here.

ONTO Wallet New Node Registration Tutorial
Stay on top of your game! We’ve released a new video tutorial on how to register a node, making it easier than ever to get started.

Joining the Exocore Ecosystem
ONTO Wallet is now a part of the Exocore ecosystem, reinforcing our commitment to providing top-tier decentralized solutions.

Orange Protocol ENS on Base Campaign
We’re excited to celebrate ENS’s expansion to the Base chain, a major step toward bringing billions of people onchain! You can now mint and manage ENS subnames directly on Base with lower gas fees. In collaboration with the artist MEK, we’ve unveiled artwork capturing this milestone. This campaign boosts the integration of ENS as a digital identity in decentralized applications. Don’t miss out — join the campaign today!

Community

Engagement is at the heart of what we do. This week, we kept the momentum going with interactive sessions and fun activities:

Wordle Game
We hosted our first-ever Wordle game during this week’s discussions, and it was a hit! Due to its success, it will now become a monthly feature. Special thanks to our hosts, SasenDish and Iamfurst, for their energy!

Telegram Community Discussion
The Ontology French Telegram channel hosted a session on the history of crypto, focusing on the Mt. Gox collapse. Special thanks to Mathus95 for his valuable insights.

Publications

Check out our latest articles for deep dives into critical Web3 issues:

Decentralized Identity and Reputation: Balancing Freedom and Regulation
Discover how decentralized identity systems can protect privacy while addressing the need for regulation. Real-world examples like Silk Road and Tornado Cash illustrate the challenges and solutions. Read more.
With transparency and engagement, we could create a system that balances freedom with responsibility.
Mark Cuban’s Challenge to Trump Supporters
This article highlights Mark Cuban’s comments and their relevance to the echo chambers in venture capital. Read here.
As we continue to develop Web3 technologies, let’s push for a world where investor reputations and venture capital histories are public, verifiable, and untouchable by spin.
Stay Connected

Stay engaged and informed by following us on our social media channels. Your participation is essential as we continue to build a more secure and inclusive digital world together.

Ontology Website / ONTO Website / OWallet (GitHub)
Twitter / Reddit / Facebook / LinkedIn / YouTube
Telegram Announcements / Telegram English / Discord

Ontology Weekly Report: September 3rd — 9th, 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Protecting Cloud Environments at Scale


by Dominik Sowinski

In today’s cloud-driven world, securing digital infrastructure is more challenging than ever. With advanced persistent threats (APTs) on the rise and global conflicts intensifying cyber risks, adapting cloud security strategies is essential. At cyberevolution 2024, Dominik Sowinski, Cybersecurity Architect at Siemens AG, will explore how organizations can fortify their cloud environments against emerging threats.

Dominik’s talk will cover the latest attack trends and offer strategies for protecting cloud infrastructures at scale. He’ll delve into how AI, automation, and secure architecture can help mitigate risks, while highlighting best practices for building a resilient cloud security framework.

For professionals tasked with safeguarding their organization's cloud operations, this session is a must. Don’t miss out on the opportunity to stay ahead of evolving threats in today’s dynamic cybersecurity landscape.


Metadium

Explorer Update


Dear Community,

We are excited to announce that the Metadium Explorer website has been updated. A new feature has been added to the Token Transfer menu that restricts queries for data beyond the displayed offset range. This allows you to access data more reliably, improving the overall user experience.

Metadium will continue to prioritize your convenience and security as we make ongoing improvements.

Thank you.

Thank you for your continued interest and support.

The Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Explorer Update was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What is Dynamic Access Control? Ties to Authorization

Benefits of dynamic access control and how it works, with a focus on its role in financial services and key features for improved access management

Introduced as part of Windows Server 2012, Dynamic Access Control (DAC) enables administrators to regulate network access based on a number of dynamic variables. For instance, dynamic access control can grant a user access to network resources while on a private internet connection, but restrict their access if they’re on a public Wi-Fi network. This makes dynamic access control well-suited to meeting the demands of modern access management. Financial service providers can use dynamic access control to enhance their data governance in a way that doesn’t interfere with the user experience.
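The private-versus-public-network example above can be sketched as a simple attribute-based access decision. This is an illustrative sketch only, not the Windows Server DAC API; all names here (`AccessContext`, `evaluate_access`, the `confidential/` prefix) are assumptions for illustration.

```python
# Minimal sketch of a dynamic access control decision: the grant/deny
# outcome depends on a runtime attribute (network type), not just the
# user's static group membership.
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    resource: str
    network: str  # "private" or "public"

def evaluate_access(ctx: AccessContext) -> bool:
    """Grant full access on a private network; restrict on public Wi-Fi."""
    if ctx.network == "private":
        return True
    # On an untrusted network, only allow low-sensitivity resources.
    return not ctx.resource.startswith("confidential/")

print(evaluate_access(AccessContext("alice", "confidential/ledger.xlsx", "public")))   # False
print(evaluate_access(AccessContext("alice", "confidential/ledger.xlsx", "private")))  # True
```

The same user and resource yield different decisions as the connection context changes, which is the defining property of dynamic (context-aware) access control.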


BlueSky

Share video on Bluesky!

Bluesky now has video!

After much anticipation, you can now share videos on Bluesky! Let’s dive right into the quick facts.

Quick facts

Each post can contain one video.
Videos can be up to 60 seconds long.
Bluesky currently supports .mp4, .mpeg, .webm, and .mov video files.
By default, videos will auto-play. You can turn off auto-play in Settings.

Update to version 1.91 of the mobile app or refresh desktop to begin watching video on Bluesky. We're rolling out the ability to post video gradually to ensure a smooth experience.

Some more details

You can attach subtitles to your video.
Currently, you can upload 25 videos / 10 GB of video per day. We may tweak this limit.

At Bluesky, the product team works hand-in-hand with Trust & Safety to develop new features. Here’s the safety tooling available with video:

You must verify your email before you can upload a video. This is one step to decrease spam and abuse with video.
You can apply labels to your own videos, for example, for adult content.
You can submit reports to Bluesky’s moderation team for posts with video. These posts may be labeled or taken down. Video that contains illegal content will be purged from our infrastructure.
For users that repeatedly violate our community guidelines with video content, Bluesky’s moderation team may remove your ability to upload videos.
Every video is processed via Hive and Thorn to scan for content that requires a content warning or content that should be taken down (e.g. illegal material like CSAM).
When you delete a post that contains video, the video will be deleted immediately. Shortly afterwards, the data will be entirely purged from Bluesky infrastructure as well.

Sports, pop culture, politics, breaking news, and so much more just got a lot more exciting on Bluesky! We’re so excited for our community to continue to grow. See you on Bluesky!

Tuesday, 10. September 2024

KuppingerCole

A Glimpse into the 2024 IGA Market Landscape


The IGA market continues to grow, and although at a mature technical stage, it continues to evolve in the areas of intelligence and automation. Today, some organizations are still looking at replacing UAP and ILM or IAG tools, but most are opting for a comprehensive IGA solution that simplifies deployment and operations and tackles risks originating from inefficient access governance. The level of identity and access intelligence has become a key differentiator between IGA product solutions. Automation is still the key trend in IGA, reducing management workload by automating tasks, providing recommendations, and improving operational efficiency.

Nitish Deshpande, Research Analyst at KuppingerCole, will discuss the current state of the IGA market, the core capabilities required by IGA solutions as well as the business activities supported by IGA solutions. He will describe our Leadership Compass methodology and process and show some high-level results from the report which was published last month.




Unlocking Success: Practice-Oriented Role Management and Authorization Concept Administration in Focus


IT professionals face the challenge of efficiently managing complex role structures and authorization concepts. The sheer number of individual entitlements and role objects complicates not only their creation but also their continuous adaptation to changing requirements in identity and access management (IAM). In addition, compliance requirements must be met and changes documented in a traceable manner. With the help of modern technologies such as centralized platforms, visual analytics, and workflow engines, the challenges of role management and authorization concept administration can be tackled effectively.

Join the IAM experts from KuppingerCole Analysts and Nexis as they discuss how the complexity of role structures, compliance requirements, and the need for traceability of changes pose significant challenges in IAM.

Matthias Reinwarth, Director Practice IAM at KuppingerCole Analysts, will give an overview of the growing need for a comprehensive and well-administered role concept. He will also highlight why this is particularly important for meeting legal and regulatory requirements.

Alexander Puchta, Head of Professional Services at Nexis GmbH, will explain how standardized approaches and integrations enable customers to implement best practices and meet compliance requirements. Practical examples will illustrate the applicability of these solutions.




Analyst's View: Passwordless Authentication for Enterprises


by Alejandro Leal

Driven by the security risks and inconvenience associated with passwords, organizations are increasingly moving towards eliminating them altogether. Passwordless authentication solutions have emerged as a compelling alternative, offering enhanced security features and improved user convenience compared to traditional methods. Although passwordless options have been around for a while, some recent solutions are gaining traction with enterprises and even consumer-facing businesses.

1Kosmos BlockID

Navigating the Complexities of Modern Customer Identity Verification


In an era where identity theft and fraud are rampant, understanding the complexities of customer identity verification is crucial for businesses, especially in the financial sector. This involves meticulous Know Your Customer processes, safeguarding sensitive customer data, and adhering to global regulations to prevent fraudulent activities. Technological advancements such as AI, blockchain, and biometrics have revolutionized these processes, ensuring they are more secure and user-friendly.

Understanding KYC (Know Your Customer)

Know Your Customer, commonly called KYC, is a pivotal component of customer identity verification. KYC is a process in which businesses verify the identity of their clients and their identity documents, ensuring that they are genuine and assessing the potential risks associated with maintaining a business relationship with them. Businesses, particularly in the financial sector, employ KYC procedures to comply with global regulations and prevent fraudulent activities such as money laundering, identity fraud, and identity theft.
The KYC process includes various stages, such as customer identification, customer due diligence, and ongoing monitoring of a customer’s account and transactions. It involves collecting, verifying, and maintaining detailed customer information, including personal details, contact information, and document verification. As a result, KYC helps create a secure business environment, fostering trust between clients and businesses.

Data Privacy and Protection

In customer identity verification, data privacy and protection of sensitive information are significant. Safeguarding customer data against unauthorized access and potential breaches is indispensable for maintaining customer trust and regulatory compliance. Businesses must establish robust data protection mechanisms that ensure customer data is stored, processed, and transmitted securely.
Data protection goes beyond the confines of technological safeguards. It encompasses legal and procedural measures, including consent management, data minimization, and adherence to global data protection regulations. In essence, protecting customer data is not merely a technical requirement but a comprehensive approach that integrates technology, legal compliance, and ethical considerations in handling a customer’s identity information.

Verification Process and User Experience

The verification process is a critical juncture where customer experience and security converge. An effective verification method requires businesses to ensure the process is streamlined, user-friendly, and secure, balancing stringent security measures with a seamless user experience. Businesses must design intuitive online verification processes, minimizing customer effort and reducing the abandonment rate.
An optimized customer verification process incorporates multiple verification methods, such as document verification, biometric authentication, and two-factor authentication, to ensure compliance and enhance security. Furthermore, it’s imperative to ensure that the customer verification process is agile, adapting to evolving customer needs and emerging security threats. Thus, fostering a verification process that encapsulates user-centricity and security is instrumental in enhancing customer satisfaction and trust.

How Do You Verify Customer Identity? Utilizing AI and ML in Verification

Artificial Intelligence (AI) and Machine Learning (ML) are transformative technologies reshaping the landscape of customer identity verification. AI and ML algorithms can analyze vast datasets, identify patterns, and facilitate real-time decision-making in the identity verification process. These technologies enable automated document verification, both facial recognition and voice recognition, and anomaly detection, enhancing the accuracy and efficiency of identity verification.
By harnessing the power of AI and ML, businesses and financial institutions can automate repetitive tasks, reduce human error, and expedite the verification process. These technologies also allow for the continuous improvement of verification procedures, as the algorithms learn and adapt to new patterns and threats, ensuring the verification process remains robust against evolving fraudulent tactics.

Blockchain for Secure Data Storage

Blockchain technology is emerging as a formidable force in securing customer data and enhancing the integrity of identity verification processes. Blockchain allows for the creation of decentralized and immutable ledgers where customer data can be stored securely, mitigating the risks associated with centralized data storage, such as data breaches and unauthorized access.
In a blockchain-based identity verification system, a customer’s identity data is encrypted and stored in a decentralized manner, ensuring it is resilient against tampering and unauthorized access. This technology fosters enhanced data integrity and trust, as customers can exercise greater control over their data, and businesses can ensure that the data used to verify customers is accurate and unaltered.

Biometrics and Advanced Verification Methods

Biometrics have cemented their place as a cornerstone in advanced identity verification methods. Biometric verification encompasses various modalities of ID verification, such as fingerprint recognition, facial recognition, and voice authentication. These methods leverage individuals’ unique biological and physical characteristics, providing high security and accuracy in identity and verification services.
Employing biometrics in the verification process enhances the user experience by enabling quick and effortless verification. Moreover, it bolsters security by ensuring that the verified identity corresponds to a live individual, mitigating the risks associated with identity theft and spoofing. As biometric technology continues to evolve, it is poised to play an increasingly pivotal role in shaping secure and user-friendly identity verification processes.

Legal and Compliance Aspects
Global Regulatory Framework

Navigating the global regulatory landscape is indispensable in customer identity verification. International regulations and guidelines govern the processes and protocols for verifying customer identities. These regulatory frameworks aim to safeguard customer data, prevent fraudulent activities, and promote a secure digital ecosystem. Adhering to these regulations is paramount for businesses to maintain operational legitimacy and foster customer trust.
These global regulations often mandate stringent KYC (Know Your Customer) verification procedures, Anti-Money Laundering (AML) policies, and robust data protection measures. They necessitate continuous compliance, necessitating businesses to stay abreast of regulatory updates and dynamically align their verification processes to meet evolving compliance standards.

GDPR, CCPA, and Other Data Protection Laws

Prominent data protection regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are pivotal in shaping customer identity verification processes. These regulations advocate for stringent data protection measures, consent management, and enhanced user control over personal data. Compliance with these laws is imperative to safeguard user data and uphold organizational credibility and brand reputation.
These regulations entail specific provisions regarding collecting, storing, and processing personal data during customer verification. They advocate for data minimization, purpose limitation, and enhanced security measures to prevent unauthorized access and breaches of personal identification. Therefore, understanding and incorporating these legal provisions are crucial for businesses to foster lawful and secure identity verification processes.

Challenges and Solutions in Customer Identity Verification
Balancing Security and User-Friendliness

Creating a verification process that is both secure and user-friendly is a challenge. A robust verification process must ensure that security is maintained, but it should also avoid creating cumbersome processes that may deter users. Simplifying and streamlining the verification process while maintaining high-security standards is crucial for enhancing user satisfaction and trust.
Employing intuitive user interfaces, minimizing the number of required user actions, and utilizing technologies like biometrics can aid in achieving this balance. Adaptive authentication, which adjusts the level of required verification based on the associated risk, is another approach that can optimize the user experience without compromising security.
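The adaptive-authentication idea can be illustrated with a minimal risk-scoring sketch. This is a toy model, not any vendor's implementation; the function names, signals, and thresholds (`risk_score`, `required_factors`) are all assumptions for illustration.

```python
# Sketch of adaptive (risk-based) authentication: the number of required
# authentication factors scales with a simple risk score computed from
# contextual signals about the login attempt.
def risk_score(new_device: bool, unusual_location: bool, high_value_action: bool) -> int:
    """Count how many risk signals are present (0-3)."""
    return sum([new_device, unusual_location, high_value_action])

def required_factors(score: int) -> list[str]:
    """Map a risk score to the verification steps a user must complete."""
    if score == 0:
        return ["password"]                   # low risk: single factor
    if score == 1:
        return ["password", "otp"]            # medium risk: step up to 2FA
    return ["password", "otp", "biometric"]   # high risk: strongest verification

# A familiar device and location requires only a password; a new device
# triggers a one-time-passcode challenge on top of it.
print(required_factors(risk_score(new_device=True, unusual_location=False, high_value_action=False)))
```

Because low-risk logins skip extra challenges entirely, most users see a frictionless flow while suspicious attempts still face step-up verification.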

Dealing with Fraud and Identity Theft

Fraud and identity theft remain pervasive threats in today’s digital age. Crafting verification processes that can robustly counteract these threats is crucial. Techniques such as multi-factor authentication, machine learning to detect anomalous patterns, and continuously updated security protocols can enhance resilience against these challenges.
Cultivating user awareness about potential threats and safe practices is vital. Education and clear communication can empower users to act as a robust first line of defense, recognizing and averting potential security threats before they manifest into breaches.

Future-Proofing Verification Processes

Ensuring that verification processes remain relevant and effective in evolving technological landscapes and emerging threats is essential. Future-proofing involves cultivating a flexible and adaptive verification strategy that swiftly incorporates new technologies, addresses emerging threats, and meets changing regulatory requirements.
Continuous learning, proactive adaptation of new technologies, and fostering a security-centric organizational culture are critical facets of future-proofing verification processes. It involves technological adaptability and strategic foresight to anticipate future trends and challenges, ensuring sustained relevance and effectiveness.

Automate Your Customer Verification Process with 1Kosmos

1Kosmos integrates with the pivotal aspects of customer identity verification, modernizing and securing the customer onboarding process. It revolutionizes KYC (Know Your Customer) by offering self-service identity verification, ensuring customers are authenticated with over 99% accuracy.
1Kosmos ensures a robust and unbiased verification process by utilizing live facial biometrics matched with government-issued credentials. Moreover, it empowers customers with a digital wallet, allowing them to securely transact and share Personally Identifiable Information (PII), enhancing user experience and trust.
Our platform’s emphasis on privacy by design aligns with the global emphasis on data protection. It puts users in complete control of their PII, ensuring enhanced security and compliance with regulations such as GDPR and CCPA.
1Kosmos’ innovative approach, combining biometrics and blockchain technology, enhances the security and efficiency of the customer identity verification process and fosters a user-centric approach, balancing stringent security measures with a seamless user experience.
Beyond refining customer identity verification, 1Kosmos also incorporates added security features like:
1. Biometric-based Authentication: We push biometrics and authentication into a new “who you are” paradigm. 1Kosmos uses biometrics to identify individuals, not devices, through credential triangulation and identity verification.
2. Identity Proofing: 1Kosmos provides tamper evident and trustworthy digital verification of identity – anywhere, anytime and on any device with over 99% accuracy.
3. Privacy by Design: Embedding privacy into the design of our ecosystem is a core principle of 1Kosmos. We protect personally identifiable information in a distributed identity architecture, and the encrypted data is only accessible by the user.
4. Distributed Ledger: 1Kosmos protects personally identifiable information in a private and permissioned blockchain, encrypts digital identities, and is only accessible by the user. The distributed properties ensure no databases to breach or honeypots for hackers to target.
5. Interoperability: 1Kosmos can readily integrate with existing infrastructure through its 50+ out-of-the-box integrations or via API/SDK.
6. Industry Certifications: Certified to and exceeding the requirements of the NIST 800-63-3, FIDO2, UK DIATF, and iBeta PAD-2 specifications.

To learn more about the 1Kosmos solution, visit the platform capabilities and feature comparison pages of our website.

The post Navigating the Complexities of Modern Customer Identity Verification appeared first on 1Kosmos.


SC Media - Identity and Access

Oktane 2024: Security BEGINS with identity

Join industry leaders and innovators at Oktane 2024 in Las Vegas to explore the future of Identity as the cornerstone of security, redefining how organizations protect every touchpoint in an evolving digital landscape.



KuppingerCole

KuppingerCole Cybersecurity Council Reflects on the CrowdStrike Incident: Lessons and Future Directions


by Berthold Kerl

On September 4, 2024, KuppingerCole’s Cybersecurity Council convened for its third meeting of the year. This council, composed of Chief Information Security Officers (CISOs) from some of Europe’s largest organizations, provides a platform for discussing pressing cybersecurity challenges. This session focused on the July 2024 CrowdStrike incident, which caused widespread disruption to Windows systems globally, and provided members the opportunity to share their lessons learned and proposed future actions.

The incident, caused by a faulty kernel-level driver, resulted in the crash of around 8 million machines worldwide, particularly affecting systems using BitLocker encryption. John Tolbert, KuppingerCole’s lead analyst, opened the discussion with an analysis of the event, pointing out that insufficient pre-deployment testing and the absence of a phased rollout were key factors in the incident’s scale. Tolbert also presented findings from his recent research into Endpoint Protection, Detection, and Response (EPDR) tools, highlighting the growing complexity and risk that accompanies widespread reliance on these solutions.

The attending CISOs, representing a variety of industries from banking to energy and retail, provided invaluable feedback on how their organizations dealt with the fallout from the CrowdStrike incident. Their experiences offered a wide range of perspectives: from those who directly used CrowdStrike to those impacted by the vulnerabilities of suppliers who relied on it. A key theme that emerged was the importance of improving testing procedures, ensuring stronger controls over software updates, and reinforcing supply chain security practices.

Across the board, CISOs emphasized the importance of Business Continuity Management (BCM). One organization reported that despite having thousands of systems down, their BCM efforts ensured a rapid recovery, with 95% of systems restored within 48 hours. Others, however, encountered significant operational downtime, particularly in sectors reliant on point-of-sale systems. For these organizations, recovery was hampered by complex dependencies on both internal and third-party systems.

Another key insight revolved around insurance and liability issues. CISOs debated the challenges of pursuing insurance claims in incidents where the root cause stems from software vendors rather than cyberattacks. Many organizations are now considering adding technical insurance to their cyber policies, as existing coverages did not account for software-induced outages.

One of the more nuanced discussions concerned the merits of multi-vendor EPDR strategies. While employing multiple security tools may reduce dependence on a single vendor, the increased complexity of managing and integrating different solutions often brings its own risks. Several members expressed concern over this approach, with one noting that a multi-EPDR strategy could cause operational inefficiencies that outweigh the potential benefits.

The session concluded with a focus on key takeaways:

Better Testing and Controlled Rollouts: Vendors must implement more stringent testing protocols and provide customers with better control over update timings to avoid global disruptions.

Supply Chain Security: Organizations need to reassess their vendor management strategies, ensuring that service-level agreements (SLAs) clearly define responsibilities during incidents.

Incident Communication: Timely and transparent communication with internal teams and external partners is critical in managing the fallout from large-scale incidents like CrowdStrike’s.

The KuppingerCole Cybersecurity Council continues to serve as an essential forum for CISOs to exchange insights and best practices. The next in-person meeting will take place during the cyberevolution 2024 conference, scheduled for December 3-5 in Frankfurt, where members will further explore cutting-edge cybersecurity strategies and enjoy networking opportunities.

This lively session offered valuable insights for council members and showcased the ongoing relevance of collaborative efforts in the cybersecurity space. Through these discussions, the council can drive industry-wide improvements in how security incidents are managed, both for member organizations and the broader public.

Next Meeting: December 3-5, 2024, cyberevolution, Frankfurt.


Indicio

From federated to decentralized identity: Why Verifiable Credentials are the next step in identity management

The post From federated to decentralized identity: Why Verifiable Credentials are the next step in identity management appeared first on Indicio.

By: Helen Garneau

In today’s digital world, identity is at the core of how individuals interact with online services. From accessing email to making online purchases, proving who you are is fundamental.

There are two methods for managing online identities: federated identity, the legacy approach, and decentralized identity, the new one. Each takes a different approach to where personal data is stored in order to authenticate an identity. Federated identity, which has dominated identity management for years, relies on centralized data management: personal data is stored in a database and checked against the login and password of a user account. Decentralized identity, by contrast, allows people, organizations, and things to hold their own personal data, with the source and integrity of that data cryptographically authenticated for identity verification.

We’ll explain this in more detail in a moment, but this distinction — centralized vs decentralized — has profound implications for data privacy, security, and user experience.

Federated Identity: A Step Beyond Centralized Identity

Federated identity systems improve upon traditional centralized digital identity by allowing a single sign-on (SSO) across multiple platforms. Instead of creating separate accounts for each service, users can log in once using a trusted identity provider (IdP) like Google, Facebook, or Microsoft, and access various services. This system offers convenience for both users and service providers, reducing the friction of managing multiple identities.

Federated identity providers get their information directly from users during account creation or from external sources like social media, public records, and other databases. In many cases, businesses rely on these providers to authenticate users, paying for verification services or receiving data in exchange for marketing insights. While this model offers convenience, it has significant drawbacks.

The Drawbacks of Federated Identity

Centralized Control: Even though federated identity reduces the need for multiple login credentials, it still relies on centralized identity providers. These providers act as gatekeepers to online services, standing between an end user and the service they are accessing. This creates a system where a few large enterprises control a vast number of digital interactions.

Lack of Privacy: Federated identity providers typically gather extensive amounts of user data, which is then monetized. Users may not be aware of how much data is being shared across services or sold to third parties, leading to privacy concerns. As more services link to federated identities, the amount of shared data can grow exponentially.

Single Points of Failure: Reliance on one or two major identity providers also introduces risk. If a federated identity provider goes offline, or if an account is locked or hacked, users lose access to all associated services. This concentration of control makes federated systems prone to major disruptions when something goes wrong.

Data Breaches: Federated systems, though more distributed than centralized identity models, still concentrate sensitive data in the hands of a few large corporations. As history has shown, these providers are frequent targets for hackers, making them vulnerable to large-scale breaches that compromise millions of users at once.

Decentralized Identity: A User-Centric Solution with Verifiable Credentials

Decentralized identity flips the traditional centralized model on its head. Instead of relying on centralized authorities to manage identity data collected from third parties, decentralized identity systems give individuals control over their own data.

How does this work? It’s a two-step process. First, a global standard from the World Wide Web Consortium (W3C) allows people and organizations to create decentralized identifiers (DIDs), which they can cryptographically prove they control. Then, using these DIDs, they can add digital credentials that contain relevant identity information, like a government ID, bank account, or passport, which makes it easy to present their information digitally for verification by other entities, independently, without intervention from federated systems.

Verifiable Credentials are a special type of digital credential that offer a powerful and efficient way to issue, share, and verify important data. What sets them apart is that the data is digitally signed by the trusted issuer, ensuring its origin and authenticity can be instantly verified using simple software—without needing logins, passwords, or checking against a database. Since you hold your own data, you can choose when to share it, solving a key issue in data privacy regulation: lack of consent. Plus, some Verifiable Credentials let you selectively share only the necessary information or use privacy-preserving features. And if anyone tries to alter the credential after it’s issued, the change is easy to spot during verification.

The combination of DIDs and Verifiable Credentials means that you can always be certain of the source of a credential and that the data in the credential hasn’t been altered.
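As a toy illustration of that tamper-evidence property, the sketch below signs a credential’s claims and detects any later change. This is a minimal sketch, not code from any VC library: an HMAC with a shared secret stands in for the issuer’s asymmetric digital signature (real Verifiable Credentials use signature schemes such as Ed25519), and every name here is hypothetical.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret-key"  # stand-in for the issuer's private key

def issue_credential(claims: dict) -> dict:
    """Sign the canonical JSON form of the claims (HMAC stand-in)."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Recompute the proof; any altered claim changes it."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = issue_credential({"name": "Alice", "over_18": True})
assert verify_credential(vc)       # untampered: verifies
vc["claims"]["over_18"] = False    # someone edits a claim after issuance
assert not verify_credential(vc)   # the change is spotted during verification
```

No database lookup or login is involved: verification only needs the credential itself and the issuer’s key material.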

The Advantages of Decentralized Identity with Verifiable Credentials

User Control and Privacy: In a decentralized identity system, individuals have full control over their credentials. They decide which pieces of information to share and with whom. This is in contrast to federated identity, where large identity providers mediate these transactions. Decentralized identity systems enable self-sovereign identity (SSI), meaning users have complete autonomy over their personal data.

Improved Privacy through Selective Disclosure: Verifiable Credentials allow for selective disclosure, where users can prove certain facts (like being over 18) without revealing unnecessary information (like a full birthdate). This significantly enhances privacy and minimizes the sharing of personal data compared to federated identity systems, where more information than necessary is often shared across services.

No Single Point of Failure: Unlike federated identity, decentralized identity doesn’t rely on any single provider. This dramatically reduces the risk of losing access to services in the event of an account compromise or a provider outage. The use of distributed ledger technology means there is no central database that can be breached, making decentralized identity systems inherently more secure.

Persistent Identity: When a credential issuer writes the metadata for a credential to a distributed ledger, the identity it supports cannot be taken away. The immutability of data written to a distributed ledger means that a Verifiable Credential can always be verified. Important to note: only the credential’s metadata, the data needed to perform the cryptography, is written to the ledger. No personal data goes on the ledger.

Added Security: When you don’t have to store personal data in a database to manage identity authentication and access, it can’t be stolen. It’s as simple as that. Another huge benefit: you can access accounts or systems without having to use passwords. And if you want the ultimate in security, you can issue biometrics as Verifiable Credentials. This means that when a person performs a biometric scan, they simultaneously present a biometric template in a Verifiable Credential, and the scan is compared with the template. This effectively binds biometric data to a person and can be used to prevent generative AI deepfakery.

Efficiency and Convenience: While federated identity simplifies login processes by allowing users to access multiple services with one account, decentralized identity goes even further. Once Verifiable Credentials are issued, they can be reused across different services without relying on a third-party identity provider for each transaction. This speeds up verification processes and reduces reliance on external parties.
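Selective disclosure can be sketched with salted hashes, in the spirit of (though far simpler than) formats like SD-JWT: the issuer signs only digests of the claims, and the holder reveals the salt and value for just the claims they choose. A hypothetical, stdlib-only sketch, not any production credential format:

```python
import hashlib
import json
import secrets

def issue(claims: dict):
    """Issuer salts and hashes each claim; only the digests get signed."""
    disclosures = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
    digests = sorted(
        hashlib.sha256(json.dumps([salt, k, v]).encode()).hexdigest()
        for k, (salt, v) in disclosures.items()
    )
    return digests, disclosures  # digests go into the signed credential

def present(disclosures, reveal):
    """Holder reveals only the chosen (salt, value) pairs."""
    return {k: disclosures[k] for k in reveal}

def verify(digests, presented):
    """Verifier checks each revealed claim hashes to a signed digest."""
    return all(
        hashlib.sha256(json.dumps([salt, k, v]).encode()).hexdigest() in digests
        for k, (salt, v) in presented.items()
    )

digests, disc = issue({"name": "Alice", "birthdate": "1990-01-01", "over_18": True})
shown = present(disc, ["over_18"])  # prove age status only
assert verify(digests, shown)
assert "birthdate" not in shown     # everything else stays private
```

The random salt prevents a verifier from guessing low-entropy claims (like a birthdate) from their digests alone.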

Why Decentralized Identity and VCs Are the Future

Decentralized identity, powered by verifiable credentials, represents a paradigm shift in how we manage identity online. By addressing the security, privacy, and efficiency challenges inherent in centralized and federated systems, decentralized identity offers a more robust solution that traditional identity systems cannot match. By eliminating the need for centralized identity providers and reducing the risk of data breaches, decentralized identity systems offer a more secure and private way to manage digital identities. Moreover, they deliver a more seamless and user-friendly experience by enabling users to reuse credentials across services without intermediaries.

In an increasingly interconnected world, decentralized identity and VCs pave the way for a more secure, private, and user-centric digital future.

Visit Indicio for more information on decentralized identity and verifiable credentials. Or contact us to find out how your organization can boost your digital identity programme.

###

Suggested reading:

Beginners guide

What are Verifiable Credentials? (With Pictures!)

What is DIDComm? (With Pictures!)

How verifiable credentials disrupt online fraud, phishing, and identity theft


Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community



Ocean Protocol

Predictoor Benchmarking: 180-Day Profitability of Linear Classifiers

Benchmarking seven different linear classifier models to determine the best one for Predictoor & trader profits

Summary

This benchmarking blog post tries to answer the question, “Which linear classifier model makes the most $?” So we benchmarked all seven linear classifier Predictoor models over 180 days (50k 5-min candle iterations) to show absolute value profitability. This time frame is 10x longer than that of previous blog post benchmarks, which covered only 18 days (5k iterations). A 180-day time frame therefore gives a better picture of absolute value profit, which helps determine the best models for Predictoor & trader bots.

Predictoor Profit vs Time for the most successful model, ClassifLinearRidge with None calibration

Over the 180-day term, Predictoor profit fluctuates: some 18-day periods make $, some lose $, and some remain relatively flat. This is demonstrated in the plot above of Predictoor Profit vs Time for the ClassifLinearRidge model with None calibration, the most successful model for Predictoor profit.

That’s why it’s important to benchmark over longer time frames (180 days+) rather than just short time frames if we want to understand absolute value profitability. Nonetheless, short 18-day benchmarks are useful to compare relative performance of one model vs another (but one cannot draw definitive conclusions about profitability).

Trader Profit vs Time for ClassifLinearRidge with None calibration

A plot of Trader Profit vs Time for the same model also shows how 18-day periods within a 180-day time frame will either make $, lose $, or remain flat.

This blog post benchmarks Ocean Predictoor simulations for all the Predictoor linear classifier models: ClassifLinearLasso, ClassifLinearLasso_Balanced, ClassifLinearRidge, ClassifLinearRidge_Balanced, ClassifLinearElasticNet, ClassifLinearElasticNet_Balanced, and ClassifLinearSVM. Each implementation is compared with three different calibrations.

This blog post then walks through each of the benchmark plots for Predictoor/trader profit and compares the models & their calibrations.

1. Introduction

1.1 What is Ocean Predictoor?

For information about Ocean Predictoor, please refer to the Predictoor Series blog post that catalogs all the blog posts, articles, and talks related to Predictoor. Learn about ML concepts such as classification, L1 & L2 regularization, calibration, and Predictoor’s simulation tools (“pdr sim” and “pdr multisim”) in the Regularized Linear Classifiers With Calibration blog post. Learn about ML balancing in the blog post, The Effects of Balancing on Calibrated Linear Classifiers.

1.2 Benchmarks Outline

We run benchmarks on the approaches:

ClassifLinearLasso & ClassifLinearLasso_Balanced — L1 regularization.

ClassifLinearRidge & ClassifLinearRidge_Balanced — L2 regularization.

ClassifLinearElasticNet & ClassifLinearElasticNet_Balanced — L1 & L2 regularization.

ClassifLinearSVM — L2 regularization.

The models are benchmarked with the same three calibration approaches, None, Isotonic, and Sigmoid, as in the Linear SVM Classifier with Calibration blog post.
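For intuition on the Sigmoid option: Platt-style sigmoid calibration fits a two-parameter logistic map from raw classifier scores to probabilities. The toy fitter below is an illustrative sketch with made-up scores and names, not Predictoor or scikit-learn code; it learns p = 1 / (1 + exp(A·s + B)) by gradient descent on log-loss.

```python
import math

def platt_fit(scores, labels, lr=0.1, steps=2000):
    """Fit p = 1 / (1 + exp(A*s + B)) by gradient descent on log-loss."""
    A, B = 0.0, 0.0
    for _ in range(steps):
        gA = gB = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * s + B))
            # dLogLoss/dA = (p - y) * (-s), dLogLoss/dB = (p - y) * (-1)
            gA += (p - y) * (-s)
            gB += (p - y) * (-1.0)
        A -= lr * gA / len(scores)
        B -= lr * gB / len(scores)
    return A, B

# made-up raw scores and true up/down labels
scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 0, 1, 1, 1]
A, B = platt_fit(scores, labels)
cal = lambda s: 1.0 / (1.0 + math.exp(A * s + B))
assert cal(2.0) > 0.5 > cal(-2.0)  # high scores map to high probabilities
```

Isotonic calibration instead fits a monotone step function to the scores, which is more flexible but needs more data to avoid overfitting.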

1.3 Experimental Setup

The models were trained on BTC-USDT & ETH-USDT data from Jan 1, 2024 to July 15, 2024. All other experimental parameters, defined in the my_ppss.yaml file, are the same as in the previous blog post, The Effects of Balancing on Calibrated Linear Classifiers.

2. 180-Day Profits of ClassifLinearLasso Balanced & Unbalanced

The ClassifLinearLasso & ClassifLinearLasso_Balanced models are implemented with scikit-learn’s LogisticRegression() function with parameters for a linear kernel trick & L1 penalty.
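To see what the L1 penalty does, here is a tiny pure-Python logistic regression trained by (sub)gradient descent, with the penalty switchable between L1 and L2. It is an illustrative sketch on made-up data, not the Predictoor implementation: the L1 penalty drives the weight of an uninformative feature to (near) zero, while L2 only shrinks weights.

```python
import math

def train_logreg(X, y, penalty="l1", lam=0.1, lr=0.05, steps=3000):
    """Logistic regression via batch (sub)gradient descent, L1 or L2 penalty."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        for j in range(d):
            if penalty == "l1":  # subgradient of lam * |w_j|
                grad[j] += lam * (1 if w[j] > 0 else -1 if w[j] < 0 else 0)
            else:                # gradient of lam * w_j^2
                grad[j] += 2 * lam * w[j]
            w[j] -= lr * grad[j]
    return w

# feature 0 predicts the label; feature 1 is essentially noise
X = [[1.0, 0.3], [0.9, -0.2], [-1.0, 0.1], [-0.8, -0.4]]
y = [1, 1, 0, 0]
w_l1 = train_logreg(X, y, penalty="l1")
w_l2 = train_logreg(X, y, penalty="l2")
```

Inspecting the results, w_l1 keeps a substantial weight on the informative feature while pinning the noisy one near zero; this sparsity is the defining behavior of Lasso-style models.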

2.1 ClassifLinearLasso (Unbalanced)

2.1.1 Predictoor Profit

Max Predictoor Profit: 26,545.88 OCEAN

Calibration: None, Max_n_train: 2000, Autoregressive_n: 2

2.1.2 Trader Profit

Max Trader Profit: $432.36 USD

Calibration: Isotonic, Max_n_train: 1000, Autoregressive_n: 2

2.1.3 Analysis

The ClassifLinearLasso model made moderate Predictoor and trader profits in the 180 days. However, the model maximized Predictoor profit better than trader profit — it generated the third best Predictoor profit of all the benchmarks.

2.2 ClassifLinearLasso_Balanced

2.2.1 Predictoor Profit

Max Predictoor Profit: 6,915.77 OCEAN

Calibration: Sigmoid, Max_n_train: 1000, Autoregressive_n: 2

2.2.2 Trader Profit

Max Trader Profit: $639.00 USD

Calibration: Isotonic, Max_n_train: 1000, Autoregressive_n: 1

2.2.3 Analysis

While not excelling in Predictoor profits, ClassifLinearLasso_Balanced shows that balancing can improve trader profit returns. This model could be useful where stability and moderate trader returns are desired.
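Balancing here means reweighting training samples so that minority-class candles count more. A common recipe, the one scikit-learn’s class_weight="balanced" mode uses, is w_c = n_samples / (n_classes * count_c). A quick stdlib sketch with hypothetical up/down labels:

```python
from collections import Counter

def balanced_class_weights(labels):
    """'Balanced' weights: n_samples / (n_classes * count_c) per class."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# an up/down label stream where 'up' candles dominate 80/20
labels = ["up"] * 80 + ["down"] * 20
w = balanced_class_weights(labels)
assert w["down"] == 2.5    # 100 / (2 * 20): rare class upweighted
assert w["up"] == 0.625    # 100 / (2 * 80): common class downweighted
```

Each class then contributes equally to the loss, which is why balanced models behave differently from their unbalanced counterparts in these benchmarks.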

3. 180-Day Profits of ClassifLinearRidge Balanced & Unbalanced

The ClassifLinearRidge & ClassifLinearRidge_Balanced models are implemented with scikit-learn’s LogisticRegression() function with parameters for a linear kernel trick & L2 penalty.

3.1 ClassifLinearRidge (Unbalanced)

3.1.1 Predictoor Profit

Max Predictoor Profit: 41,790.77 OCEAN

Calibration: None, Max_n_train: 2000, Autoregressive_n: 2

3.1.2 Trader Profit

Max Trader Profit: $619.43 USD

Calibration: Isotonic, Max_n_train: 1000, Autoregressive_n: 1

3.1.3 Analysis

The ClassifLinearRidge model produced the highest Predictoor profit of all the linear classifier model benchmarks. Therefore, it is a good candidate for running with a Predictoor bot over 180 days. It also generated moderate trader profits with Isotonic calibration.

3.2 ClassifLinearRidge_Balanced

3.2.1 Predictoor Profit

Max Predictoor Profit: 12,811.63 OCEAN

Calibration: Sigmoid, Max_n_train: 5000, Autoregressive_n: 2

3.2.2 Trader Profit

Max Trader Profit: $897.49 USD

Calibration: None, Max_n_train: 2000, Autoregressive_n: 2

3.2.3 Analysis

The ClassifLinearRidge_Balanced model demonstrates strong trader profitability as balancing appears to have boosted trader profits compared to the ClassifLinearRidge model. Interestingly, the model did not need calibration to achieve its large trader profit, whereas previous benchmarks show Isotonic calibration best maximized trader profits.

4. 180-Day Profits of ClassifLinearElasticNet Balanced & Unbalanced

The ClassifLinearElasticNet & ClassifLinearElasticNet_Balanced models are implemented with scikit-learn’s LogisticRegression() function with parameters for a linear kernel trick, L1 & L2 penalties.

4.1 ClassifLinearElasticNet (Unbalanced)

4.1.1 Predictoor Profit

Max Predictoor Profit: 39,109.21 OCEAN

Calibration: None, Max_n_train: 2000, Autoregressive_n: 2

4.1.2 Trader Profit

Max Trader Profit: $551.87 USD

Calibration: Isotonic, Max_n_train: 1000, Autoregressive_n: 1

4.1.3 Analysis

The ClassifLinearElasticNet model generated the second highest Predictoor profit of the benchmarked models, second only to the ClassifLinearRidge model. Thus, L2 regularization appears to have made both models more accurate than the rest.

4.2 ClassifLinearElasticNet_Balanced

4.2.1 Predictoor Profit

Max Predictoor Profit: 12,709.23 OCEAN

Calibration: Sigmoid, Max_n_train: 2000, Autoregressive_n: 2

4.2.2 Trader Profit

Max Trader Profit: $1,172.21 USD

Calibration: None, Max_n_train: 2000, Autoregressive_n: 1

4.2.3 Analysis

The ClassifLinearElasticNet_Balanced model achieved the highest trader profit among all benchmarks. As in the other benchmarks, balancing appears to have boosted trader profits. None calibration produced the best trader profit, suggesting that adding classifier calibration to balancing may cause overfitting.

5. 180-Day Profits of ClassifLinearSVM

The ClassifLinearSVM model is implemented with scikit-learn’s LinearSVC() function with parameters for a linear kernel trick and regularization C value of 0.025 (the strength of the regularization is inversely proportional to C).

5.1 Predictoor Profit

Max Predictoor Profit: -162,610.90 OCEAN

Calibration: Sigmoid, Max_n_train: 1000, Autoregressive_n: 2

5.2 Trader Profit

Max Trader Profit: $520.30 USD

Calibration: Isotonic, Max_n_train: 1000, Autoregressive_n: 1

5.3 Analysis

The ClassifLinearSVM model generated significant losses in Predictoor profit, so it is not recommended for use with a Predictoor bot for 180 days. However, it is possible that tuning the model’s regularization parameter could improve profitability. The model managed to generate moderate trader returns with Isotonic calibration.

6. Analysis and Summary

Which linear classifier model makes the most $?

6.1 Predictoor Profit Analysis

The best Predictoor profit was gained by the ClassifLinearRidge model. It gained 41,790.77 OCEAN over the 180-day term with None calibration, max_n_train = 2000, and autoregressive_n = 2. The next best model for Predictoor profitability was ClassifLinearElasticNet. Benchmarks for the ClassifLinearSVM model were very poor, losing more than 162k OCEAN during the 180 days. Thus, it should not be used for a Predictoor bot over 180-day terms.

6.2 Trader Profit Analysis

The best trader profit was gained by the ClassifLinearElasticNet_Balanced model. It profited $1,172.21 USD with None calibration, max_n_train = 2000, and autoregressive_n = 1. The next best model for trader profitability was ClassifLinearRidge_Balanced.

6.3 Benchmark Trends

Benchmarks show that balancing improved trader profits, especially when paired with L2 regularization, but balancing also reduced Predictoor profits. The L2-regularized logistic regression models performed best in both Predictoor & trader profit.

6.4 Benchmark Summary

Here’s the breakdown of the best absolute value profitabilities for all seven linear classifier Predictoor models.

ClassifLinearLasso
Max Predictoor profit: 26545.88 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $432.36 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 2

ClassifLinearLasso_Balanced
Max Predictoor profit: 6915.77 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2
Max trader profit: $639.00 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

ClassifLinearRidge
Max Predictoor profit: 41790.77 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $619.43 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

ClassifLinearRidge_Balanced
Max Predictoor profit: 12811.63 OCEAN, calibration = Sigmoid, max_n_train = 5000, autoregressive_n = 2
Max trader profit: $897.49 USD, calibration = None, max_n_train = 2000, autoregressive_n = 2

ClassifLinearElasticNet
Max Predictoor profit: 39109.21 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $551.87 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

ClassifLinearElasticNet_Balanced
Max Predictoor profit: 12709.23 OCEAN, calibration = Sigmoid, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $1172.21 USD, calibration = None, max_n_train = 2000, autoregressive_n = 1

ClassifLinearSVM
Max Predictoor profit: -162610.90 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2
Max trader profit: $520.30 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

7. Conclusion

We benchmarked the absolute value profitability of seven Predictoor linear classifier models over 180 days. The best model for maximizing Predictoor profit was ClassifLinearRidge. It gained 41,790.77 OCEAN over the 180-day term with None calibration, max_n_train = 2000, and autoregressive_n = 2 tunings. The next best model for Predictoor profitability was ClassifLinearElasticNet. The benchmarks also found that the ClassifLinearSVM model was highly negative in Predictoor profitability, losing more than 162k OCEAN over the time frame.

The best model for maximizing trader profit was ClassifLinearElasticNet_Balanced. It profited $1,172.21 USD with None calibration, max_n_train = 2000, and autoregressive_n = 1 tunings. The next best model for trader profitability was ClassifLinearRidge_Balanced.

Throughout the benchmarks, balancing appeared to improve trader profits, especially when paired with L2 regularization, but simultaneously reduced Predictoor profits. Given that the top 2 Predictoor profit models were ClassifLinearRidge & ClassifLinearElasticNet and the top 2 trader profit models were ClassifLinearElasticNet_Balanced & ClassifLinearRidge_Balanced, it appears that L2 regularization of the linear logistic regression models helped to generate the best profits.

8. Appendix: Tables

8.1 ClassifLinearLasso

Max Predictoor profit: 26545.88 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $432.36 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 2

8.2 ClassifLinearLasso_Balanced

Max Predictoor profit: 6915.77 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2

Max trader profit: $639.00 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

8.3 ClassifLinearRidge

Max Predictoor profit: 41790.77 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $619.43 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

8.4 ClassifLinearRidge_Balanced

Max Predictoor profit: 12811.63 OCEAN, calibration = Sigmoid, max_n_train = 5000, autoregressive_n = 2

Max trader profit: $897.49 USD, calibration = None, max_n_train = 2000, autoregressive_n = 2

8.5 ClassifLinearElasticNet

Max Predictoor profit: 39109.21 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $551.87 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

8.6 ClassifLinearElasticNet_Balanced

Max Predictoor profit: 12709.23 OCEAN, calibration = Sigmoid, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $1172.21 USD, calibration = None, max_n_train = 2000, autoregressive_n = 1

8.7 ClassifLinearSVM

Max Predictoor profit: -162610.90 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2

Max trader profit: $520.30 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

Predictoor Benchmarking: 180-Day Profitability of Linear Classifiers was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Lockstep

It’s safe to assume AIs can at least read. Isn’t it?


What do you think Large Language Models do?

It’s easy to think LLMs think. Anthropomorphism is literally a force of nature. Human beings have evolved with a “Theory of Mind” to help us act more effectively with other conscious beings (I think there might be a better term somewhere for “Theory of Mind”; after all, it’s more a cognitive faculty than a “theory”).

It’s a powerful instinct. And, like other instincts that evolved for a simpler life on the savannah, Theory of Mind can tend to overdo things. It can lead us to intuit, falsely, that all sorts of things are alive (anyone remember the Pet Rock craze?). It seems Theory of Mind leads to “psychological illusions” just as our pre-wired visual cortex leads to optical illusions when we hit it with unnatural inputs. And so some people go so far as to feel that LLMs are sentient.

But most of us are probably wise to the impression that AIs give of being life-like.

So, what do LLMs really do?

Surely it’s safe to presume that a Large Language Model can at least read? I mean, their very name suggests that LLMs have some kind of grasp of language. Any fool can see they ingest text, interpret it and describe what it means. So that means they’re reading, right?

Well, no, AIs don’t even do that.

Check out this short explainer by the wonderful @albertatech on Instagram, of a howler made by all LLMs when asked “How many Rs are in the word strawberry?”.
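The failure is a tokenization artifact: an LLM sees “strawberry” as a couple of sub-word tokens, not as a sequence of letters, so it has no direct view of the characters it is being asked to count. Any character-level program gets it right trivially:

```python
# Counting characters: trivial for code, famously hard for token-based LLMs.
word = "strawberry"
print(word.count("r"))  # s-t-r-a-w-b-e-r-r-y -> 3
assert word.count("r") == 3
```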

People’s mental models of AI are hugely important. The truth is that AIs lack anything even close to self-awareness. They cannot reflect on the things they generate and why. They have no inner voice that applies common sense to filter right and wrong, much less a conscience to sort good and bad. This makes AIs truly alien creatures, despite their best impressions.

Their failure modes are not even random (with apologies to Wolfgang Pauli). Society has no institutional mechanisms to deal with AIs’ deeply weird failures and yet we’re letting them drive on our public roads.

We casually talk about AIs “reading” and “writing”. We see them “seeing”; we interpret their outputs as “interpretations”.

These are all metaphors, and they’re wildly misleading.

The post It’s safe to assume AIs can at least read. Isn’t it? appeared first on Lockstep.


KuppingerCole

Cloud Security - Problem Solved? No!


by Osman Celik

Cloud computing is an essential tool for organizations of all sizes, from small businesses to large enterprises. However, even as cloud adoption continues to accelerate, securing cloud environments remains a major challenge. Today, organizations still face significant difficulties in protecting their data and resources in the cloud. One of the main reasons is the complexity of cloud environments and the shared responsibility model, which distributes security duties between the cloud provider and the user. Many organizations still struggle to understand where their cloud security responsibilities begin and end. This lack of clarity continues to leave cloud environments exposed to a wide range of vulnerabilities.

Organizations that operate in highly regulated industries, such as healthcare, finance, and government, are particularly vulnerable to cloud security challenges. These sectors deal with large amounts of sensitive data, such as personal information, financial records, and healthcare data. This makes them prime targets for cybercriminals. Additionally, these industries face strict regulatory requirements that further complicate their cloud adoption. While larger organizations may have the resources to invest in advanced tools and hire experts, many small and medium-sized enterprises (SMEs) struggle to implement the necessary security measures due to limited resources.

Cloud Security Challenges in 2024

In 2024, challenges like data breaches, misconfigurations, insider threats, regulatory compliance issues, third-party risks, and insufficient identity and access management (IAM) continue to be the top cloud security concerns for organizations. Data breaches remain one of the most significant risks because of the high volume of sensitive data stored in the cloud. Attackers can easily exploit weak security measures and vulnerabilities to gain unauthorized access to confidential data. Misconfigurations, such as exposing databases to the public without proper encryption, are also common and frequently result in massive data leaks.

The complexity of cloud environments contributes to the human factor, which in turn leads to insider threats, as employees may overlook critical security measures. Whether intentional or accidental, insiders can cause severe damage by accessing sensitive data, misusing credentials, or exposing systems to cybercriminals. Regulatory challenges add another layer of complexity, as organizations must comply with regional and/or global compliance requirements, such as the General Data Protection Regulation (GDPR), the Payment Card Industry Data Security Standard (PCI-DSS), or the Health Insurance Portability and Accountability Act (HIPAA). Ensuring regulatory compliance in cloud environments can be resource-intensive and expensive. As many organizations depend on external vendors and cloud service providers to handle critical parts of their infrastructure, they are also often exposed to third-party risk. When one of these third parties is compromised, it can lead to security incidents across the entire ecosystem.

Inadequate IAM practices increase the risk of security breaches in cloud environments, given IAM's central role in managing user access to resources. Weak IAM policies lead to unauthorized access and allow attackers to exploit compromised accounts and passwords, and the absence of multi-factor authentication (MFA) further raises the risk of intrusions into cloud systems. These IAM-related vulnerabilities highlight the need for organizations to enforce strict access controls and regularly audit user permissions to ensure they are in line with the principle of least privilege.
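As a minimal illustration of auditing permissions against the principle of least privilege (the names and shapes here are hypothetical, not from any specific IAM product), a review can be reduced to comparing the permissions a role holds against the permissions it is actually observed using:

```typescript
// Hypothetical sketch: flag granted permissions a role never uses,
// so they can be revoked under the principle of least privilege.
type RoleAudit = {
  role: string;
  granted: string[]; // permissions the role currently holds
  used: string[];    // permissions observed in access logs
};

function unusedPermissions(audit: RoleAudit): string[] {
  const used = new Set(audit.used);
  return audit.granted.filter((p) => !used.has(p));
}

const audit: RoleAudit = {
  role: "billing-reader",
  granted: ["invoices:read", "invoices:write", "users:read"],
  used: ["invoices:read"],
};

// Candidates for revocation in the next permissions review.
console.log(unusedPermissions(audit)); // ["invoices:write", "users:read"]
```

Real IAM audits would also weigh how recently each permission was used and whether it was granted directly or via group membership.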

The Financial Impact of Security Incidents is Alarming

According to IBM's 2024 "Cost of a Data Breach" report, the global average cost of a data breach in the cloud was $4.88 million per incident, with the healthcare industry experiencing the highest average costs at $9.77 million per breach. Additionally, misconfigurations were estimated to have cost organizations over $3.18 trillion in 2023, due to the combined expenses of lost revenue, remediation efforts, and regulatory fines. These figures highlight the financial impact that cloud security failures can impose.

Hybrid Cloud is still an Option

Cloud security concerns are still a significant factor preventing some organizations from fully embracing cloud technology. While many businesses recognize the benefits of moving to the cloud, security concerns often lead to delayed adoption of cloud systems. In some cases, organizations delay cloud migration or implement hybrid solutions. Such organizations often store critical data on-premises while only shifting non-sensitive data to the cloud. This approach allows them to maintain greater control over their most valuable assets but limits the full potential of cloud-based innovation.

Enhance Your Cloud Protection through Advanced Security Strategies

With employees and devices accessing cloud resources from anywhere, Zero Trust assumes that threats could arise both inside and outside the network. The Zero Trust model enforces a "never trust, always verify" approach, ensuring that all users, devices, and applications are continuously authenticated and authorized before accessing resources.

AI and ML automate threat detection, analysis, and response actions. These technologies can also process enormous volumes of data in real time, enabling security systems to detect anomalies and malicious activities much faster than human analysts. By learning from patterns in cloud traffic and user behavior, AI and ML can anticipate potential cloud security threats and act proactively. However, these technologies are not risk-free. Attackers can also use them to launch more advanced attacks that learn how to bypass security systems.
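As a toy illustration of the anomaly detection idea (a z-score over request rates; production ML systems use far richer models), flagging a data point that deviates sharply from a baseline can look like this:

```typescript
// Toy anomaly detector: flag a new observation whose z-score against
// a historical baseline exceeds a threshold. Real ML-based systems are
// far more sophisticated; this only illustrates the core idea.
function isAnomalous(history: number[], value: number, threshold = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance) || 1; // avoid division by zero
  return Math.abs(value - mean) / std > threshold;
}

// Baseline: roughly 100 requests/minute from a given account.
const baseline = [98, 102, 97, 101, 100, 99, 103];
console.log(isAnomalous(baseline, 104)); // false: within normal variation
console.log(isAnomalous(baseline, 500)); // true: likely automated abuse
```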

Automated compliance management tools facilitate the monitoring of cloud environments, generate compliance reports, and alert users to any potential violations. These solutions reduce the manual effort required for audits and ensure that organizations stay up to date with changing regulatory standards.

Cloud Security Posture Management (CSPM) solutions address misconfigurations and maintain strong security hygiene across cloud environments. CSPM tools monitor cloud configurations to identify risks such as exposed storage buckets, insecure firewall settings, or overly permissive access controls. Misconfigurations are one of the most common causes of cloud security breaches, and CSPM helps organizations detect and remediate these issues before they can be exploited. As more organizations adopt multi-cloud or hybrid cloud strategies, CSPM provides the visibility and control needed to secure these complex environments.
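A CSPM check can be pictured as a rule engine over declarative resource configurations. This sketch uses made-up field names rather than any real provider's schema:

```typescript
// Hypothetical CSPM-style check: scan resource configs for common
// misconfigurations. Field names are illustrative, not tied to any
// real cloud provider's API.
type ResourceConfig = {
  id: string;
  type: "storage-bucket" | "firewall-rule";
  publicAccess?: boolean;
  encrypted?: boolean;
  sourceRange?: string; // CIDR a firewall rule allows traffic from
};

function findMisconfigurations(resources: ResourceConfig[]): string[] {
  const findings: string[] = [];
  for (const r of resources) {
    if (r.type === "storage-bucket" && r.publicAccess) {
      findings.push(`${r.id}: storage bucket is publicly accessible`);
    }
    if (r.type === "storage-bucket" && r.encrypted === false) {
      findings.push(`${r.id}: storage bucket is not encrypted at rest`);
    }
    if (r.type === "firewall-rule" && r.sourceRange === "0.0.0.0/0") {
      findings.push(`${r.id}: firewall rule open to the entire internet`);
    }
  }
  return findings;
}

const findings = findMisconfigurations([
  { id: "bucket-1", type: "storage-bucket", publicAccess: true, encrypted: false },
  { id: "fw-1", type: "firewall-rule", sourceRange: "10.0.0.0/8" },
]);
console.log(findings.length); // 2: both findings are on bucket-1
```

Actual CSPM products run hundreds of such rules continuously against live cloud inventory and map each finding to remediation guidance.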

We are Back in Town - cyberevolution 2024

We are excited to invite you to our cyberevolution event in Frankfurt am Main on December 3-5, 2024. We will be exploring a wide range of cybersecurity topics, with plenty of chances to chat with industry experts. Cloud Security will be one of the big topics on the agenda.

Here are some sessions that might catch your interest:

Cloud Application Security from CNAPP to AINAPP
The Cloud Conundrum: Balancing Agility with Security
Security at Scale - Mastering Cloud Security in the Cyberwar Era

You can also check out our published Leadership Compasses below:

Leadership Compass – Zero Trust Network Access (ZTNA)
Leadership Compass – Cloud Security Posture Management (CSPM)
Leadership Compass – Cloud Native Application Protection Platforms (CNAPP)

Lockstep

Money, the Metaverse and David Birch (Making Data Better EP15)

George and I had a virtual blast recently on our podcast with David Birch. As an adviser and global raconteur in payments, identity and digital transformation, Dave needs little introduction. With Meeco COO Victoria Richardson, he has just co-authored a fascinating book, Money in the Metaverse: Digital assets, online identities, spatial computing and why virtual worlds mean real business.

Dave took us into their thinking about secure, private transactions in the metaverse(s).

Virtual money makes the virtual world go around

Dave was drawn to write a new book after finding it strangely clunky to pay for things in at least one virtual world.

He told us about being at an industry event with lots of people “walking around as avatars and meeting each other”. That all seemed real enough until he wanted to buy something. He had to come out of the metaverse and undergo an all-too-real payment rigmarole—scanning a QR code, then another website, typing in card details—before he could rejoin the virtual fun.

Surely, he thought, “I should be doing things inside the metaverse instead of taking off my VR glasses!”. He enlisted Victoria as co-author, who he describes as a “brilliant digital strategist” with a proper framework for thinking about these things.

The state of the art in self-contained metaverse commerce is all about DeFi, Web 3, tokenisation and cryptocurrency.  Loudly sceptical about these things IRL, Dave says “there’s absolutely no doubt” they will form “the next generation financial market infrastructure”.

Dave has an optimistic and generous view of the metaverse. “It’s early days” (of course) yet he is confident that the metaverse’s many pioneers will continue to refine and innovate and surprise us, taking AR/VR technology in new directions.

He likens Apple’s Vision Pro headset to the Apple Newton of the late 1990s. It wasn’t attractive to typical consumers either, but over time, everyone saw that the Newton was the prototype iPad.  So who’s to say where the Vision Pro will lead?

And I should add that Dave does not think $3,000 for a Vision Pro is unreasonable.

In this blog, I’m going to go deep once more on authenticity in the metaverse (I’ve previously looked at how the metaverse should force a rigorous re-examination of digital identity).

But first, here’s a sample of the areas George and I covered with Dave (don’t forget to take a listen):

In less than 45 minutes, we traversed gaming, brand marketing, car insurance, banking, newspapers and print media, comedy, concert tickets, adult services, COVID, teenage mental health, and virtual girlfriends and boyfriends.

Digital, says Dave, is “the natural UX for young people today. It’s how they meet their friends, how they socialize, how they connect. So, in a very short time, brands are going to need to be in those spaces as well.”

On ownership and tokenisation: “[The] amount of effort that’s already going into the proto-metaverses is substantial, but it’s hamstrung by the fact that the things that they build aren’t theirs. They belong to the platform.”

On economics, in-built platform security is such an imperative that Dave and Victoria see virtual worlds as potentially safer and more efficient than the real world. As a result, transaction costs will fall, and businesses in all sectors will feel pressure to move into the metaverse.

Real authenticity

When we turned to authenticity, Dave set the scene as follows:

“Of course, in the metaverse nothing’s real, putting to one side what real means … we certainly don’t want the metaverse to end up in the mess that we’re in at the moment with the internet where we see fake [TV personalities] shilling cryptocurrency”.

Cryptographic security must be “part of the warp and weft” of a new infrastructure, in a way that we simply overlooked in the rush to Web 1 and Web 2.  Dave points out that a whole “panoply of keys, key generation, certificates, digital signatures and encryption” was missing from the internet.  He is a forceful champion of security being inherent to the infrastructure; on this point he calls himself a “maximalist”.

What would such security look like? Well, we might not even notice it. Crucially, Dave does not imagine us having to prove our bona fides by showing pictures of virtual driver licences. I agree; it would be moronic to simulate a superficial verification process when it is so bad in real life.

Instead, Dave foresees metaverse platforms just knowing your authorisation attributes and applying them to covertly regulate your virtual experience. So, if for example you’re not 18 years old and you approach an age-restricted venue or event, then you won’t even have the option of going in.

“In any metaverse I’d want to take part in, if a photo doesn’t have a digital signature that says ‘this comes from the New York Times’ or ‘from George Peabody’, I don’t want to even see it.”

So, one crucial distinction he sees between the metaverse and any virtual world built so far on the internet is that authenticity will be part of the infrastructure.

In a sense, everything in a Dave Birch metaverse will be real!

Questions

A simulated world in which everything we see is true could save digital civilisation. But we need to approach any Utopia with caution.

What’s real in an unreal world? What is truth? If the answer is everything’s relative, then authenticity will need to be configurable.

Beauty is in the eye of the beholder, and authenticity in the metaverse needs to be in the hands of the beholder as well.

The point of the metaverse is to shift reality. If users have any freedom to adjust what’s real, then they will need to set their own authenticity standards. I might for example be able to have the BBC determine what political stories are true as far as I am concerned and have New Yorker film critics control my cinema experience.

Inevitably, beneath any metaverse, are the unseen platforms. As we discussed with Dave, platforms have had most of the control so far. Dave calls for a shift in control and asset ownership from landlords to denizens.

There are many privacy issues. If a metaverse platform knows my personal attributes and applies them to shape my virtual experience (such as removing pubs and clubs from my experience if I am under-age) then the platform must be watching what I am trying to do around the clock.

I guess that’s a price users could pay for the seamlessness of having the world “know” them without having to see a virtual ID card. That trade-off might be perfectly fine—if we trust the platforms, and/or they are closely regulated.

If metaverses even come close to mimicking the richness of the real world, the platforms will have unprecedented executive control over our activity. They will literally direct what we experience and even how we behave, because the platforms’ software will mediate our very existence in the worlds.

Is the metaverse going to need benign meta-dictators?

More on Money in the Metaverse

Reviewed by Irish Tech News, May 2, 2024.

Dave was interviewed on the Pay it Forward podcast, June 28, 2024.

Victoria and Dave were interviewed on The Banker, July 10, 2024.


The post Money, the Metaverse and David Birch (Making Data Better EP15) appeared first on Lockstep.


IDnow

IDnow’s YRIS solution obtains Substantial Level of Assurance for digital identities according to eIDAS

With the latest certification of French Cybersecurity Agency (ANSSI), YRIS is now eligible to be featured on FranceConnect+

Munich/Rennes, September 10, 2024 – IDnow, a leading identity verification platform provider in Europe, has received a security Visa from the French Cybersecurity Agency (Agence nationale de la sécurité des systèmes d’information, ANSSI), certifying the Substantial Level of Assurance (LoA) for digital identities for its YRIS digital identity wallet. The LoA is defined by the European eIDAS regulation (electronic Identification, Authentication and Trust Services).

Seamless reuse of verified digital identity credentials

YRIS was first launched in June 2022 and allows the seamless reuse of verified digital identity credentials. It enables users to easily and securely prove their identity without having to scan a physical ID document and their face each and every time access to a service is needed. The strength of YRIS also lies in the fact that it allows all French citizens to create this digital identity based on the old French national ID card, the new national ID card, and the residence permit.

Today, more than 450,000 users in France are using YRIS in their day-to-day lives via FranceConnect, the national digital identity federator, where users authenticate or identify themselves for eGovernment and other regulated services in France. The new certification also qualifies YRIS to be featured on FranceConnect+, and would thus make another digital identity provider available on the platform.

FranceConnect+ is similar to FranceConnect but its Substantial LoA provides an eIDAS node that will permit mutual recognition of French citizens on services in other European Union member states with their French digital identity. It can be used to carry out administrative procedures with more stringent user identification requirements, such as using training credits, obtaining subsidies, etc. It can also be used to generate qualified electronic signatures, to send or receive electronic registered mail, and to meet identification requirements for financial transactions subject to AML-CTF regulations.

Authentication and verification in financial services, insurance, HR sectors and electronic registered mail

Besides possible integration on FranceConnect+, YRIS can also be used for proof of identity and as a secured method of strong authentication in the financial or insurance industries, and in human resources. Several use cases, such as financial account opening, insurance contracts, loans or rental agreements, can now be processed via YRIS thanks to the new Substantial LoA. Based on the eIDAS regulation, YRIS can also be used by providers of electronic registered mail services as a compliant method for identifying the recipient, a promising market for mail replacement.

“This certification is the latest company milestone for IDnow, which remains committed to playing a key role in Europe’s ambition to create and offer a single, reliable and secure digital identity to its citizens and residents,” says Marc Norlain, Managing Director and Head of the Reusable Identities Unit at IDnow.

“With their reusable digital identities, end users in France will be able to open a bank account or carry out any banking operation, perform a qualified electronic signature, open an online gaming account, or send or receive an electronic registered letter. We are at a pivotal moment in the digital identity ecosystem in France and Europe overall and IDnow is proud to lead the way with our expertise and our proven solutions.”


Veridium

Veridium Joins IGEL at Disrupt 2024: Elevating Security for the Edge

We’re excited to announce that Veridium will be joining forces with our strategic partner IGEL at IGEL Disrupt 2024! This flagship event is the premier gathering for cloud workspaces and digital transformation enthusiasts, and we can’t wait to showcase how Veridium’s cutting-edge identity authentication solutions complement IGEL’s advanced edge computing environments.

As a pioneer in revolutionizing user identity security, Veridium empowers organizations to enhance their security posture through our Identity Assurance Platform. By reliably verifying user identities and devices, we ensure that your digital workspaces are protected by AI-based identity threat protection and continuous authentication. Our platform addresses a fundamental security challenge: accurate and secure user authentication from start to finish—across virtual desktops, cloud workspaces, and beyond.

Veridium’s platform integrates seamlessly with existing Identity/SSO providers, while extending security to ZTNA, MDM, and EDR solutions. We offer the widest range of authenticators on the market, including passwordless and phishing-resistant options, FIDO tokens, and patent-protected biometric solutions (such as contactless fingerprints, facial recognition, and behavioral biometrics). Whether your organization is beginning its identity and access management (IAM) journey or refining mature processes, Veridium ensures consistent, secure authentication that keeps pace with evolving threats.

At Disrupt 2024, join us to discover how Veridium and IGEL are transforming secure access for the modern digital workspace. Experience our live demos and hear from our experts on how we’re enabling secure, seamless, and scalable solutions across VDI and DaaS environments.

Special Offer: Use coupon code DISRUPT24EXCLUSIVE to get your ticket for just 120 Euros!

Read our Data Sheet to learn more about our IGEL integration! Stay tuned for updates, and we look forward to seeing you at IGEL Disrupt 2024!

PingTalk

Ping Identity: Leading the Future of Passwordless Authentication

Eliminate passwords and user friction with Ping Identity. Learn why we're leaders in passwordless authentication in the latest Leadership Compass report.

Passwords are a security nightmare and the biggest cause of user friction. However, getting rid of them in your environment may require a platform approach. The latest Leadership Compass report on Passwordless Authentication for Enterprises highlights Ping Identity as a leader in this space. Here's an in-depth look at why Ping Identity stands at the forefront of passwordless authentication for enterprises.


What is Banking as a Service (BaaS)?

Understand Banking as a Service (BaaS), its relation to embedded finance, and crucial identity security practices for providers.

Banking as a service (BaaS) is a model that allows non-bank businesses to offer financial services by integrating banking capabilities directly into their own products. This article will explain BaaS, how it works, and why identity and access management (IAM) solutions are necessary for earning trust. You'll also learn how IAM, including both customer identity and access management (CIAM) and workforce identity, enables BaaS to function securely and efficiently.


Okta

Secure OAuth 2.0 Access Tokens with Proofs of Possession

In OAuth, a valid access token grants the caller access to resources and the ability to perform actions on the resources. This means the access token is powerful and dangerous if it falls into malicious hands. The traditional bearer token scheme means the token grants anyone who possesses it access. A new OAuth 2.0 extension specification, Demonstrating Proof of Possession (DPoP), defines a standard way that binds the access token to the OAuth client sending the request, elevating access token security.

At a high level, DPoP uses a public/private key pair to create a signed DPoP proof that the authorization server and resource server use to confirm the authenticity of the request and the requesting client. This way, the token is sender-constrained, and a token thief is less likely to be able to use a compromised access token. Learn more about the problems DPoP solves and how it works by reading:
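To make the proof structure concrete, here is a sketch of the two JSON parts a DPoP proof JWT carries per the DPoP specification. The actual ES256 signature (and the ath claim used on resource requests) is omitted, and the jwk values are placeholders:

```typescript
// Sketch of the header and claims inside a DPoP proof JWT (RFC 9449).
// A real proof is signed with the client's private key (ES256);
// signing is omitted here to focus on the structure.
function buildDpopProofParts(htm: string, htu: string) {
  const header = {
    typ: "dpop+jwt",
    alg: "ES256",
    // The client's public key; the server binds the access token to it.
    jwk: { kty: "EC", crv: "P-256", x: "<public-x>", y: "<public-y>" },
  };
  const payload = {
    jti: Math.random().toString(36).slice(2), // unique id; use crypto.randomUUID() in real code
    htm, // HTTP method of the request this proof covers
    htu, // target URI, without query string or fragment
    iat: Math.floor(Date.now() / 1000), // issued-at, seconds since epoch
  };
  return { header, payload };
}

const parts = buildDpopProofParts("GET", "https://dev-133337.okta.com/api/v1/users");
console.log(parts.header.typ);  // "dpop+jwt"
console.log(parts.payload.htm); // "GET"
```

Because the proof names the exact method and URI and carries a one-time jti, a captured proof can't simply be replayed against a different endpoint.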

Elevate Access Token Security by Demonstrating Proof-of-Possession

Protect your OAuth 2.0 access token with sender constraints. Learn about possession proof tokens using DPoP.

Alisa Duncan

The primary use case for DPoP is public clients, but the spec elevates token security for all OAuth client types. Public clients are applications where authentication code runs within the end user’s browser, such as Single-Page Applications (SPAs) and mobile apps. Due to their architecture, public clients inherently carry higher risk and weaker security guarantees in authentication and authorization. Public clients can’t leverage a client secret, unlike application types that can communicate with the authorization server through a “back-channel”: a network connection opaque to users, network-sniffing attackers, and nosy developers. Without proper protection, a SPA may store tokens where they are exposed to the end user and to injection-related attacks. DPoP adds an extra protection layer that makes tokens less usable if stolen.

Table of Contents

Get the starting Angular, React, or Vue project
Add OAuth 2.0 and OpenID Connect (OIDC) to your application
Configure OAuth scopes for Okta API calls
Inspect the OAuth 2.0 bearer tokens and request resources manually
Use secure coding techniques to protect your web apps
Migrate your SPA to use DPoP
Trace the token request requiring a DPoP nonce
Request resources using DPoP headers
Manually request DPoP-protected resources
Store cryptographic keys in browser applications
Use modern evergreen browsers for secure token handling
Learn more about web security, DPoP, and OAuth 2.0

In this post, you’ll experiment with DPoP and step through migrating a public client application using OAuth bearer tokens compared to DPoP tokens. We’ll build upon the existing OAuth 2.0 Authorization Code flow. Need a refresher? Check out this post:

How Authentication and Authorization Work for SPAs

Authentication and authorization in public clients like single-page applications can be complicated! In this post, we'll walk through the Authorization Code flow with the Proof Key for Code Exchange extension to better understand how it works and what to do with the auth tokens you get back from the process.

Alisa Duncan

Note

This code project is best for developers with web development experience, knowledge of debugging network requests and responses, and familiarity with OAuth and OpenID Connect (OIDC).

The post uses Angular, but you can follow the concepts and network calls using a sample project in your favorite SPA framework. Check out samples using React or Vue. You’ll need to make a couple of minimal changes to the code. I will call out the changes, but I will not post the specific code or instructions.

Are you following the step-by-step code instructions in Angular? This post assumes you already have Angular knowledge. If you are an Angular newbie, start by building your first Angular app using the tutorial created by the Angular team.

A hands-on project requires tools for local web development.

Prerequisites

You’ll need the following tools:

Node.js v18 or greater
A web browser with good debugging capabilities, such as Chrome
Your favorite IDE. Still looking? I like VS Code and WebStorm because they have integrated terminal windows.
Terminal window (if you aren’t using an IDE with a built-in terminal)
Git and an optional GitHub account if you want to track your changes using a source control manager
An HTTP client that shows the HTTP requests and responses, such as the Http Client VS Code extension or curl

Get the starting Angular, React, or Vue project

You’ll use a starter project. These instructions are for the Angular sample project. If you are following along in React or Vue, replace the GitHub repo location with the URL for the sample you’re using.

Open a terminal window and run the following commands to get a local copy of the project in an okta-client-dpop-project directory and install dependencies. Feel free to fork the repo so you can track your changes.

git clone https://github.com/oktadev/okta-angular-dpop-example.git okta-client-dpop-project
cd okta-client-dpop-project
npm ci

Open the project in your favorite IDE. The project includes Okta’s client authentication SDKs, a sign-in button, a profile route that displays user information by calling the OIDC /userinfo endpoint, and a route that makes a call to Okta’s Users API. Both HTTP requests require an access token, so we’ll follow the requests and responses for these two calls.

React and Vue project instructions

React and Vue projects need a couple of changes:

Change the profile component to call oktaAuth.token.getUserInfo() and display the JSON output.
Add a call to Okta’s Users API /api/v1/users. You’ll replace the domain name later. You may want to create a new Users component (and route) to match the Angular sample.

Use the SDK reference docs for React and Vue.

You need to set up an authentication configuration to serve the project. Let’s do so now.

Add OAuth 2.0 and OpenID Connect (OIDC) to your application

You’ll use Okta to handle authentication and authorization in this project securely. Okta APIs have built-in DPoP support — how secure and handy! We’ll experiment with DPoP in the client application by calling Okta’s APIs.

React and Vue project instructions

Replace the two redirect URIs to match the port and callback route for the application. You’ll find the URI for both in your project’s README file. Follow the instructions in the README to add the issuer and client ID to the app. Use the https://{yourOktaDomain}/oauth2/default format for the issuer. Notice this is different from the starter code.

Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create. Select the default app name, or change it as you see fit. Choose Single-Page App and press Enter.

Use http://localhost:4200/login/callback for the Redirect URI and set the Logout Redirect URI to http://localhost:4200.

What does the Okta CLI do?

The Okta CLI will create an OIDC Single-Page App in your Okta Org. It will add the redirect URIs you specified and grant access to the Everyone group. It will also add a trusted origin for http://localhost:4200. You will see output like the following when it’s finished:

Okta application configuration:
Issuer: https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta Admin Console to create your app. See Create an Angular App for more information.

Note the Issuer and the Client ID. You’ll need those values for your authentication configuration, which is coming soon.

There’s one manual change to make in the Okta Admin Console. Add the Refresh Token grant type to your Okta Application. Open a browser tab to sign in to your Okta developer account. Navigate to Applications > Applications and find the Okta Application you created. Select the name to edit the application. Find the General Settings section and press the Edit button to add a Grant type. Activate the Refresh Token checkbox and press Save.

Leave the Okta Admin console open. You’ll continue making changes in there.

I already added Okta Angular and Okta Auth JS libraries to connect our Angular application with Okta authentication.

In your IDE, open src/app/app.config.ts and find the OktaAuthModule.forRoot() configuration. Replace {yourOktaDomain} and {yourClientID} with the values from the Okta CLI.

Configure OAuth scopes for Okta API calls

We’re calling an Okta API, so we must add the required OAuth scopes.

In the Okta Admin Console, navigate to the Okta API Scopes tab in your Okta application. Find the okta.apps.read and okta.users.read.self scopes and press the ✔️ Grant button for each.

Open the src/app/users/users.component.ts and find the call to list users: /api/v1/users. We’re taking shortcuts here, such as calling the API directly in the component for this demonstration project. In production-quality Angular apps, ensure you architect your application following best practices so you can add automated tests and troubleshoot issues quickly.

Replace {yourOktaDomain} with your Okta domain.

React and Vue project instructions

Add the two scopes to the OIDC configuration for the application. Search for “scopes” and change the array to

scopes: ['openid', 'profile', 'email', 'offline_access', 'okta.users.read.self', 'okta.apps.read'],

Replace the {yourOktaDomain} in the Okta Users API call you added in the prior section.

Start the app by running:

npm start

Open a browser tab to view the app. Open the debugging view that shows the console and network requests. Since I am using Chrome, I’ll open DevTools. Enable Preserve log in the Console and Network tabs. For the Console tab, you’ll find the preserve log option after opening the gear menu.

Let’s ensure you can sign in, call the /userinfo endpoint to see your user information, and call Okta Users API. You’ll use the Authorization Code flow and redirect to Okta for the authentication challenge. Once you emerge victorious by assuring the identity provider you are who you claim to be, the authorization server redirects you back to the application. The redirect URI includes the authorization code. Okta’s SDK (the OIDC client library) calls the /token endpoint to exchange the authorization code for tokens.
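For reference, here is a sketch of the token request the SDK assembles under the hood during that code-for-tokens exchange. Parameter names follow the OAuth 2.0 and PKCE specs; braced values are placeholders, and the SDK handles all of this for you:

```typescript
// Sketch of the request the OIDC client sends to the /token endpoint
// after the redirect. Normally Okta Auth JS builds and sends this;
// shown here only to illustrate the exchange.
function buildTokenRequest(issuer: string, code: string, codeVerifier: string) {
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    code,                        // authorization code from the redirect URI
    redirect_uri: "http://localhost:4200/login/callback",
    client_id: "{yourClientID}",
    code_verifier: codeVerifier, // PKCE proof for public clients
  });
  return { url: `${issuer}/v1/token`, method: "POST", body: body.toString() };
}

const req = buildTokenRequest(
  "https://dev-133337.okta.com/oauth2/default",
  "{authorizationCode}",
  "{pkceCodeVerifier}"
);
console.log(req.url); // https://dev-133337.okta.com/oauth2/default/v1/token
```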

After you sign in, the Angular app will display routes for “Profile” and “Users.” Navigating these routes calls the /userinfo and Users API. If you can access the routes and don’t see any HTTP request errors, you’re good to go!

Inspect the OAuth 2.0 bearer tokens and request resources manually

After signing in, you have the OAuth 2.0 access token and the OIDC ID token. Okta stores the tokens in browser storage. In DevTools, open the Application tab to view browser storage data. Okta Auth JS defaults to local storage for tokens and is configurable based on your application needs. Expand Local storage, select the application, and expand the okta-token-storage key to see the tokens and token metadata. The tokenType property is Bearer.
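For illustration, here is roughly what that stored metadata looks like when read back. The storage shape below is a simplified assumption of what Okta Auth JS writes, not the SDK's exact schema:

```typescript
// Simplified, assumed shape of the okta-token-storage entry; the real
// SDK stores more fields. In the browser you would read it with
// localStorage.getItem("okta-token-storage").
const sample = JSON.stringify({
  accessToken: {
    accessToken: "eyJ...", // the raw JWT sent in the Authorization header
    tokenType: "Bearer",
    expiresAt: 1726664760,
    scopes: ["openid", "profile", "okta.users.read.self"],
  },
});

const storage = JSON.parse(sample);
console.log(storage.accessToken.tokenType); // "Bearer"
```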

Let’s see the API calls in action in the application. Navigate to both routes. In the Network tab, you see the initial /token, /userinfo, and Users API requests.

Let’s inspect the Users API request.

The request includes the Authorization header containing the token scheme and access token. You see the format Bearer <access_token>.

The entity holding the token can legitimately request resources. Let’s try using the token in another client and impersonating the actions an attacker can take if they manage to capture it.

Note

Access tokens expire quickly. If too much time passes in these next steps, you may get a 401 Unauthorized. If you do, repeat the steps with a more recent access token by navigating between the profile and user routes to trigger a call to the API. It prompts the OIDC client (the Okta Auth JS SDK) to update expired tokens.

Copy the token from the browser, and double-check you captured the entire token. Open your HTTP client and run the following HTTP request replacing {yourOktaDomain} and {yourAccessToken}:

GET https://{yourOktaDomain}/api/v1/users HTTP/1.1
Authorization: Bearer {yourAccessToken}

If you use curl, add the verbose flag to see the request and response headers:

curl -v --header "Authorization: Bearer {yourAccessToken}" https://{yourOktaDomain}/api/v1/users

The call succeeds even though the HTTP client isn’t the same client the authorization server issued the token to (the sample app).
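A minimal sketch of that replay, with a placeholder domain and token (nothing here is a real credential):

```typescript
// A bearer token carries no binding to the client it was issued to:
// any HTTP client that attaches this header is treated as the holder.
const stolenAccessToken = "eyJraWQi...accessToken"; // copied from browser storage

const method = "GET";
const endpoint = "https://example.okta.com/api/v1/users"; // placeholder domain
const headers: Record<string, string> = {
  Authorization: `Bearer ${stolenAccessToken}`,
};

// e.g. fetch(endpoint, { method, headers }) would succeed until the token expires.
console.log(headers.Authorization.startsWith("Bearer ")); // true
```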

Let’s call another endpoint with the same access token, the Okta Applications endpoint. Run the following HTTP request replacing {yourOktaDomain} and {yourAccessToken}:

GET https://{yourOktaDomain}/api/v1/apps HTTP/1.1
Authorization: Bearer {yourAccessToken}

The call succeeds even though you call from a different client, just as you saw in the prior step with the Users API. For a privileged user, the call succeeds as long as the Okta application is granted the okta.apps.read scope and the OIDC config requests it. You may say that’s a lot of constraints, and you’re right. Okta adds a lot of guards around API requests for resources in the top-level Okta org, such as the list of Okta applications. This example demonstrates how powerful, and how vulnerable, tokens issued to privileged users like admins are. Anyone with the token can make the same request, even if they are an attacker.

Back in the app, sign out to clear the authenticated session and tokens. We’re making changes that require you to sign in from scratch.

Use secure coding techniques to protect your web apps

All web applications must use secure coding techniques to protect from attacks, breaches, and malicious use. Public clients store their tokens on the user’s device and require thoughtful security practices. Read more about SPA web security and security practices within Angular in this four-part series:

Defend Your SPA from Security Woes

Learn the basics of web security and how to apply web security foundations to protect your Single Page Applications.

Alisa Duncan

It doesn’t matter whether your application uses bearer tokens or DPoP; apps must employ secure coding practices. DPoP doesn’t prevent attackers from stealing your token, but it constrains its use. DPoP uses asymmetric cryptography to prove token ownership, so you must prevent exfiltration or unauthorized use of the keyset. An attacker who gets hold of the private key can create valid proofs.

Let’s migrate the application to DPoP and try making these HTTP requests again.

Migrate your SPA to use DPoP

Open the Okta Admin Console in the browser and navigate to Applications > Applications. Find the Okta application for this project. In the General tab, find the General Settings section and press Edit. Check the Proof of possession checkbox requiring the DPoP header in token requests. Press Save. Sign out of the Okta Admin Console.

If you try signing in again without making any code changes, you’ll see an error in the Network tab for the /token request:

HTTP/1.1 400 Bad Request

{
  "error": "invalid_dpop_proof",
  "error_description": "The DPoP proof JWT header is missing."
}

All HTTP requests to DPoP-protected resources (including the /token request) require proof. We must enable DPoP in the OIDC configuration.

The Okta Auth JS SDK has a configuration property for DPoP as part of the OIDC config. In your IDE, open src/app/app.config.ts and find the OktaAuthModule.forRoot() configuration. Add the dpop: true property. Your OIDC config will look something like this:

{
  issuer: ...,
  clientId: ...,
  redirectUri: ...,
  scopes: ['openid', 'profile', 'offline_access', 'okta.users.read.self', 'okta.apps.read'],
  dpop: true
}

Once the application rebuilds and reloads in the browser, make sure you have debugging tools open and then sign in.

Trace the token request requiring a DPoP nonce

When you sign in, you’ll see the initial call to the /token endpoint fails.

Take a look at the call’s request headers. You’ll see a header called DPoP, which contains the DPoP proof in JWT format, which means we can decode it and inspect its contents. You can use a trustworthy online tool such as JWT.io debugger or Base64 decode the header and payload sections of the JWT locally. In the JWT format, the content from the beginning up to the first . character is the header, and the content between the two . characters is the payload.

The header contains the token type, dpop+jwt, the signing algorithm, and the cryptographic key information tied to this proof. The payload includes minimal HTTP information and other properties that prevent token attack vectors.

{
  "alg": "RS256",
  "typ": "dpop+jwt",
  "jwk": { /* Key information in JSON Web Key format */ }
}

{
  "htm": "POST",
  "htu": "/oauth2/v1/token",
  "iat": 1724685617,
  "jti": "e84a...283bbf"
}
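You can reproduce the local decoding described above in a few lines. The proof below is a hypothetical stand-in assembled on the spot (header and payload only; the signature segment is fake):

```typescript
// Build a fake DPoP proof so we have something to decode; base64url is the
// encoding JWTs use for each dot-separated segment.
const b64url = (obj: object) =>
  Buffer.from(JSON.stringify(obj)).toString("base64url");

const sampleProof =
  b64url({ alg: "RS256", typ: "dpop+jwt" }) +
  "." +
  b64url({ htm: "POST", htu: "/oauth2/v1/token", iat: 1724685617 }) +
  ".fake-signature";

// A JWT is header.payload.signature; decode the first two segments.
const [headerPart, payloadPart] = sampleProof.split(".");
const header = JSON.parse(Buffer.from(headerPart, "base64url").toString());
const payload = JSON.parse(Buffer.from(payloadPart, "base64url").toString());

console.log(header.typ);  // "dpop+jwt"
console.log(payload.htm); // "POST"
```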

Why did the initial call to /token fail? Okta requires an extra handshake that elevates security: the /token call requires a DPoP nonce, provided by Okta, that the client includes in the DPoP proof. In response to the first /token call, Okta returns the standard DPoP nonce error and the DPoP-Nonce response header containing the nonce the client incorporates into the proof.

HTTP/1.1 400 Bad Request
DPoP-Nonce: "SVD....ubNc"

{
  "error": "use_dpop_nonce",
  "error_description": "Authorization server requires nonce in DPoP proof."
}

Okta’s Auth JS SDK has built-in support for DPoP-Nonce errors. Look at the DPoP proof token’s payload of the successful /token request. The payload includes the nonce returned in the first call.

{
  "htm": "POST",
  "htu": "/oauth2/v1/token",
  "iat": 1724685617,
  "jti": "e852...28396",
  "nonce": "SVD....ubNc"
}

The token request succeeds, and we now have a DPoP access token.
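The handshake can be sketched as a simple retry loop; fakeTokenEndpoint below is a stand-in for Okta’s /token endpoint, not a real API:

```typescript
// Stand-in for Okta's /token endpoint: reject proofs without the expected
// nonce using the standard use_dpop_nonce error, accept them otherwise.
type TokenResponse = { status: number; error?: string; nonce?: string };

const issuedNonce = "SVD....ubNc";
const fakeTokenEndpoint = (proofNonce?: string): TokenResponse =>
  proofNonce === issuedNonce
    ? { status: 200 }
    : { status: 400, error: "use_dpop_nonce", nonce: issuedNonce };

// First attempt: the proof carries no nonce yet.
let response = fakeTokenEndpoint();
if (response.status === 400 && response.error === "use_dpop_nonce") {
  // Retry with the nonce from the DPoP-Nonce response header baked into the proof.
  response = fakeTokenEndpoint(response.nonce);
}
console.log(response.status); // 200
```

This is the logic Okta’s Auth JS SDK performs for you; you shouldn’t need to hand-roll the retry in application code.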

Request resources using DPoP headers

In the app, navigating to view your profile succeeds because the SDK supports DPoP resource requests. You’ll see an error when navigating the “Users” route that calls Okta’s User API.

The HTTP response includes information about why the call errored.

HTTP/1.1 400 Bad Request
WWW-Authenticate: Bearer authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.users.read.self", error="invalid_request", error_description="The resource request requires a DPoP proof.", resource="/api/v1/users"

The current code to make the Users API call adds the access token using the Bearer scheme in the Authorization header, but that’s incorrect for DPoP. We must incorporate the DPoP proof and change the scheme in the HTTP request.

Open the auth interceptor in the IDE. You can find the code in the src/app/auth.interceptor.ts file.

React and Vue project instructions

Find the code you added to request Users and incorporate the Angular instructions in the project to add the DPoP proof header and the DPoP scheme.

The interceptor has a check to ensure it adds the access token to allowed origins only. Change the interceptor code as follows:

export const authInterceptor: HttpInterceptorFn = (req, next, oktaAuth = inject(OKTA_AUTH)) => {
  let request = req;
  const allowedOrigins = ['/api'];
  if (!allowedOrigins.find(origin => req.url.includes(origin))) {
    return next(request);
  }
};

We need the proof and the authorization header. We’ll generate both using Okta Auth JS. The SDK method requires the HTTP method and URI we intend to call. The URI shouldn’t include query parameters or fragments. The SDK method returns an object with properties matching headers and their values, so we can use the spread operator to populate the DPoP-required headers.
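A sketch of those two details, with the SDK’s return value mocked as a plain object (the header values are placeholders):

```typescript
// The htu claim must exclude query parameters and fragments, so strip them
// before handing the URL to the SDK.
const requestUrl = "https://example.okta.com/api/v1/users?limit=25#top"; // placeholder domain
const url = new URL(requestUrl);
const htu = `${url.origin}${url.pathname}`;

// Stand-in for what the SDK resolves to: header names mapped to values,
// ready to spread onto the outgoing request.
const dpopHeaders = {
  Authorization: "DPoP eyJraWQi...accessToken",
  Dpop: "eyJ0eXAi...proof",
};
const headers = { Accept: "application/json", ...dpopHeaders };

console.log(htu); // "https://example.okta.com/api/v1/users"
console.log(Object.keys(headers)); // [ 'Accept', 'Authorization', 'Dpop' ]
```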

Change the interceptor to match the code below.

import { DPoPHeaders } from '@okta/okta-auth-js';
import { defer, map, switchMap } from 'rxjs';

export const authInterceptor: HttpInterceptorFn = (req, next, oktaAuth = inject(OKTA_AUTH)) => {
  // allowed origin check
  const url = new URL(req.url);
  return defer(() => oktaAuth.getDPoPAuthorizationHeaders({ url: `${url.origin}${url.pathname}`, method: req.method })).pipe(
    map((dpop: DPoPHeaders) => req.clone({ setHeaders: { ...dpop } })),
    switchMap((request) => next(request))
  );
};

Now, if you sign in and call the Users API, you’ll get the list of users in your Okta org using DPoP.

Manually request DPoP-protected resources

Earlier, we pretended to steal the access token to make other resource requests. You called the Okta Apps API using the access token to see the list of all the apps your Okta org contains. What happens if we try this again when the API requires DPoP?

In DevTools, open the Network tab and find the /users call. You need both the proof and the access token for your HTTP call. Make an HTTP request:

curl -v --header "Authorization: DPoP {yourAccessToken}" --header "DPoP: {yourDPoPProof}" https://{yourOktaDomain}/api/v1/apps

The API rejected your request! You get back an error stating the DPoP proof isn’t valid:

HTTP/1.1 400 Bad Request
WWW-Authenticate: DPoP algs="RS256 RS384 RS512 ES256 ES384 ES512", authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.apps.read", error="invalid_dpop_proof", error_description="'htu' claim in the DPoP proof JWT is invalid."

If an attacker manages to capture both the proof and the token, they may only be able to make the same request. The proof constrains the calls to the HTTP method and URI, invalidating other HTTP requests.

How about making the same request?

curl -v --header "Authorization: DPoP {yourAccessToken}" --header "DPoP: {yourDPoPProof}" https://{yourOktaDomain}/api/v1/users

The API rejected your request! You still get back an error stating the DPoP proof isn’t valid:

HTTP/1.1 400 Bad Request
WWW-Authenticate: DPoP algs="RS256 RS384 RS512 ES256 ES384 ES512", authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.users.read.self", error="invalid_dpop_proof", error_description="The DPoP proof JWT has already been used.", resource="/api/v1/users"

The proof also has two other protection mechanisms: the JWT unique identifier (jti) and the issued-at time (iat). When a resource server enforces the jti claim, it tracks previous calls to prevent proof reuse, so an attacker can’t replay the proof and access token they stole. Enforcing the JWT ID isn’t required by the DPoP spec. The other protection mechanism is the proof’s issue timestamp, the iat claim. Resource servers check the issue time on proofs, and if it exceeds a threshold determined by the resource server, the server rejects the request.
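The server-side checks described above can be sketched as follows; the threshold and claim values are illustrative, not what Okta actually uses:

```typescript
// Resource-server-side replay and freshness checks on DPoP proof claims.
interface ProofClaims {
  jti: string; // JWT unique identifier
  iat: number; // issued-at time, seconds since epoch
}

const seenJtis = new Set<string>();
const MAX_AGE_SECONDS = 60; // server-chosen threshold (illustrative)

function acceptProof(claims: ProofClaims, nowSeconds: number): boolean {
  if (seenJtis.has(claims.jti)) return false; // replayed proof
  if (nowSeconds - claims.iat > MAX_AGE_SECONDS) return false; // stale proof
  seenJtis.add(claims.jti);
  return true;
}

const now = 1724685700;
console.log(acceptProof({ jti: "e852...28396", iat: now - 10 }, now));  // true
console.log(acceptProof({ jti: "e852...28396", iat: now - 10 }, now));  // false: jti reused
console.log(acceptProof({ jti: "abcd...1234", iat: now - 700 }, now));  // false: too old
```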

Store cryptographic keys in browser applications

We must securely store the keyset within the SPA and prevent attackers from exfiltrating it. If an attacker has the keyset, they can impersonate you and make DPoP-protected calls. Fortunately, the Okta SDK uses a few different techniques to mitigate keyset hijacking without any extra coding on your part.

Local and session storage aren’t secure enough; this time, we’ll rely on IndexedDB storage. The typical use case for IndexedDB is storing a large volume of data, but it has some built-in security mechanisms that work well for protecting the keyset. The SubtleCrypto API supports generating non-exportable keys, preventing browser code from turning the private key into a portable format. IndexedDB stores the keys as a CryptoKeyPair object, and DB query results return a reference to the object, not the raw key material. IndexedDB protects the sensitive private key but still works with the WebCrypto methods for signing proofs.

You can inspect the keys by following the steps:

1. Navigate to the Application tab in DevTools
2. Expand IndexedDB under the Storage sidenav
3. Expand OktaAuthJs > DPoPKeys

The downside is that the IndexedDB API is more difficult to use than other browser storage APIs. Because IndexedDB data persists, we must manually clean up the keys when we’re done. The SDK handles cleanup if the user explicitly signs out, but we can’t guarantee a user always will, so we can also clear keys before signing in.

Open src/app/app.component.ts to find the signIn() method.

React and Vue project instructions

Find the code where the project calls the signInWithRedirect() method and follow the instructions described for Angular projects.

Add the call to clear keys as the first step in the signIn() method:

public async signIn(): Promise<void> {
  await this.oktaAuth.clearDPoPStorage(true);
  await this.oktaAuth.signInWithRedirect();
}

Use modern evergreen browsers for secure token handling

Creating and storing cryptographic keys in JavaScript apps requires a capable browser. Modern, evergreen browsers have the API support required for DPoP. Check browser capability if your app supports users who use less modern, more questionable browsers. The Auth JS SDK has a method to check browser capability, authClient.features.isDPoPSupported(). You can add this check during application bootstrapping or initialization.

Remember, even if you aren’t using DPoP, modern browsers have more built-in security mechanisms. Stay secure, stay updated, and use safe browser practices whenever possible.

Learn more about web security, DPoP, and OAuth 2.0

In this post, you applied DPoP to a SPA and inspected DPoP in action. I hope you enjoyed it! If you want to learn more about the ways you can incorporate authentication and authorization security in your apps, you might want to check out these resources:

OAuth 2.0 and OpenID Connect overview
The Identity of OAuth Public Clients
Add Step-up Authentication Using Angular and NestJS
Configure OAuth 2.0 Demonstrating Proof-of-Possession

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Monday, 09. September 2024

SC Media - Identity and Access

Electronic payment firm Slim CD notifies 1.7M customers of data breach

The payment processing service said credit card information was accessed in June 2024.



liminal (was OWI)

Link Index for Customer Identity and Access Management

The post Link Index for Customer Identity and Access Management appeared first on Liminal.co.

Finicity

FinovateFall 2024: Open banking and AI set the stage for financial innovation 


When banking, fintech and finance leaders gather in New York at one of the leading fintech conferences, FinovateFall, on September 9-11, two broad topics will dominate the agenda: how new regulations and the proliferation of behavioral data is enabling the age of open banking, and how artificial intelligence (AI) and machine learning can accelerate new product development, improve the customer experience and boost profits. 

Just as we expect streaming entertainment apps to offer us personalized choices, consumers and businesses today demand more digital, personalized services from their financial institutions. For decades, banks and financial institutions operated on closed ecosystems: in-person relationships were key, data was sequestered in core banking and card systems, and third-party data came from credit bureaus.  

That’s been changing recently as more businesses and consumers embrace open banking, both in response to fintech innovation and evolving data and privacy regulations. Today, application programming interfaces (APIs) enable third parties to offer services that complement bank services. In addition, new rules give consumers more control over their data and its use. These circumstances are combining to fuel a revolution in financial services

A critical topic at FinovateFall will be how financial institutions can adapt to new Consumer Financial Protection Bureau (CFPB) rules, expected to be finalized in the coming months. The new regulations will formally establish the U.S. rules for open banking. Mastercard’s Head of Data Access and Business Development for Open Banking Ben Soccorsy will speak about how all this paves the way for a bold open banking future, discussing the opportunities posed by the new rules and how banks should address them to become a data recipient,  enhance customer experience, drive innovation and, ultimately, boost profits. 

New research emphasizes the importance of open banking  

Both businesses and consumers have welcomed open banking. According to a forthcoming global Mastercard research report set to be published in September 2024, embracing open banking will be crucial to both business-to-business partnerships and maintaining consumer relationships. Among B2B survey respondents, 92% said using AI to safeguard consumer data and streamline processes is an important consideration when selecting open banking partners. Businesses also hope that open banking can improve their profitability (69%), boost their revenue (66%) and increase productivity/efficiency (65%). 

Mastercard’s Senior Vice President for Open Banking Network Services Ryan Beaudry also speaks at FinovateFall, discussing how AI and machine learning can improve such things as account-to-account payments. That’s crucial because 80% of U.S. consumers already link their financial accounts and 66% are likely to connect their bank accounts to an app or service in the future, according to the 2024 Mastercard survey.  

The same survey also found that how financial institutions handle data and open banking is important to consumers. Indeed, many of the features that attract U.S. consumers to engage with a financial services company—efficiency, convenience, security and privacy—are driving open banking innovations.  

Asked to name the top considerations when choosing which financial institutions to do business with, more than 90% of consumers said their top four priorities were: keeping their data secure, a convenient customer experience, greater control over how their data is used, and the ability to process transactions quickly.  

Once again, FinovateFall brings together thousands of senior decision-makers from financial institutions, fintechs and the investing community. With consumers and businesses becoming more digitally savvy and hungry for new innovations in how they interact with their finances, start-ups and public companies alike will show off their latest products and innovations.  

As keynote speaker and customer experience strategist Ken Hughes said ahead of the conference, “We are in a perfect storm of change, and we need to ensure that the financial services of today are fit for the customer of tomorrow.” 

If you’re at FinovateFall yourself, make sure to meet up with our open banking experts or reach out to them directly with any questions about your open banking opportunities. You can also visit our home for everything open banking and deep dive into some of our inspirational use cases.  


The post FinovateFall 2024: Open banking and AI set the stage for financial innovation  appeared first on Finicity.


SC Media - Identity and Access

Whitepages subjected to class action over personal data publishing

Data broker Whitepages has been sued by a retired West Virginia police officer in a class action after it allegedly published his home address, which constitutes a violation of the state's 2021 statute that prohibits the disclosure of addresses and phone numbers from active and retired law enforcement personnel.



Malvertising campaign targets Lowe's employees

Attacks involved the creation of several ads redirecting to spoofed versions of Lowe's MyLowesLife employee portal in a bid to compromise credentials from current and former workers, according to a report from Malwarebytes Labs.



Misconfiguration exposes Confidant Health's mental health records

More than 120,000 files and over 1.7 million activity logs leaked by the database revealed Confidant Health patients' psychiatry intake notes, medical histories, disclosures of alcohol and other substance abuse, moods, memory, medications, and overall mental state.



Ocean Protocol

Season 5 of the Ocean Zealy Community Campaign!


We’re happy to announce Season 5 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members.

💰 Reward Pool

5,000 Ocean Tokens ($FET) that will be rewarded to the Top100 users in our leaderboard 🚀

📜Program Structure

Season 5 of the Ocean Zealy Community Campaign will feature more engaging tasks and activities, providing participants with opportunities to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

⏰Campaign Duration: 30th of September 12:00 PM UTC

🤔How Can You Participate?

Follow this link to join and earn:

https://zealy.io/cw/onceaprotocol/questboard

Season 5 of the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital…

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital: Transparency Mark Cuban recently put out a challenge: he wants Trump supporters to name any startups backed by the former president that don’t involve a member of his family. This seemingly simple call-out actually exposes a far deeper issue in venture capital — one that could be solved through the power of
Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital: Transparency

Mark Cuban recently put out a challenge: he wants Trump supporters to name any startups backed by the former president that don’t involve a member of his family. This seemingly simple call-out actually exposes a far deeper issue in venture capital — one that could be solved through the power of blockchain and decentralized identities. And it’s about time someone connected the dots.

Think about it — venture capital is notoriously opaque. Most of the time, we have no idea which startups are getting funded, why certain VCs back certain founders, and what skeletons are hiding in the closets of high-profile investors. Even if someone like Trump has a rocky investment history, there’s no easy way to track it. Cuban’s challenge brings that to the forefront. If no one can name a successful Trump-backed startup, doesn’t that say something about how easily reputations in venture capital can be manipulated or shielded from scrutiny?

Now, let’s take this to the next level. What if we could bring all this on-chain? What if every venture capitalist’s track record — every investment, successful or otherwise — was tied to their decentralized identity and available for anyone to audit? Imagine a world where the power of blockchain is leveraged to not just remove middlemen, but to remove the smoke and mirrors surrounding investor reputations. Every deal, every failure, every win would be part of a permanent, transparent ledger. No more guesswork. No more empty claims. No more hiding behind family names or closed-door deals.

This concept is rooted in the heart of what Web3 promises: transparency, trust, and the ability for people to control their own data. By connecting VC histories to decentralized identities, startups would have a new tool in their arsenal — a way to verify the legitimacy and reliability of their potential investors. The days of VCs backing founders for a quick PR boost, only to ghost them when things get tough, would be over. It would empower the startup ecosystem with verifiable truth, and most importantly, accountability.

Let’s be real — venture capital needs this kind of overhaul. The recent scandals involving bad actors like Adam Neumann or the fallout from WeWork’s botched IPO are just reminders of the shady side of this industry. And don’t get me started on the “fake it till you make it” culture rampant in Silicon Valley, where founders and investors alike build smoke screens rather than sustainable businesses.

In the future, blockchain and decentralized identities could make this all a thing of the past. And Ontology is leading the charge with its Decentralized Identity technology, which has the potential to create a new level of trust in these opaque markets. By offering zero-knowledge proofs and decentralized reputation systems, Ontology allows users to maintain privacy while still proving credibility. This is the solution that venture capital — and, frankly, business at large — has been waiting for.

Mark Cuban’s call for proof of Trump-backed startups may have been a jab, but it highlights something much more important. The VC world needs more transparency. Trump’s vague business reputation is just one example of how easily information can be spun, hidden, or hyped. With decentralized identity systems and reputation on-chain, we’d never have to ask these questions again. We’d know, without a doubt, who’s actually worth their salt.

As we continue to develop Web3 technologies, let’s push for a world where investor reputations and venture capital histories are public, verifiable, and untouchable by spin. It’s time for the truth to come on-chain.

Interested in learning more about decentralized identities and how they can revolutionize transparency in venture capital? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital… was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Nov 07, 2024: Overcoming the Challenges of MFA and a Passwordless Future

Securing user identities has become a crucial focus for organizations of all sizes. The evolution from traditional passwords to Multi-Factor Authentication (MFA) and eventually to passwordless solutions introduces various challenges, such as technical obstacles, changing threat landscapes, and resource limitations.

Oct 09, 2024: Adopting Passwordless Authentication

As businesses shift to more flexible work models, traditional password systems pose security risks and inefficiencies. The session will provide insights from recent KuppingerCole research, offering a comprehensive view of the evolving enterprise security landscape.

Ocean Protocol

Formula 1 Racing Challenge: 2024 Strategy Analysis

F1 :: 2024 Strategy Analysis Poster

‘The Formula 1 Racing Challenge’ challenges participants to analyze race strategies during the 2024 season. They will work with lap-by-lap data to assess how pit stop timing, tire selection, and stint management influence race performance. By conducting exploratory data analysis (EDA), they will identify relationships between these variables and generate insights on how strategy impacts race outcomes.

Participants will apply time series analysis, regression modeling, and multivariate techniques to track tire performance, analyze pit stop patterns, and model the effects of stint management on race pace. These methods will help them quantify how strategies evolve throughout a race and produce actionable insights for future Formula 1 strategies.

Objectives

Participants will explore the relationships between tire performance, pit stop frequency, and race outcomes. They will focus on how the number and timing of pit stops affect final race positions, using statistical methods like correlation analysis and regression to validate these relationships.

Participants will also analyze how different tire compounds influence lap times, calculate average lap times for each stint, and use time series analysis to track tire degradation. They will model how tire wear impacts lap times throughout the race, examining stint lengths and the performance of Soft, Medium, and Hard tire compounds.

Data

The dataset includes detailed lap-by-lap data for the 2024 Formula 1 season, capturing key variables such as lap times, tire compounds, pit stop timings, stint lengths, and race positions. Participants will analyze this data to explore how different factors influence race outcomes. They will assess tire performance by tracking how lap times change throughout stints, comparing the performance of Soft, Medium, and Hard compounds under varying race conditions. This analysis will allow participants to quantify how long each tire type can maintain optimal performance and how pit stop decisions align with tire wear.

The pit stop data provides precise timings, allowing participants to study the relationship between pit stop frequency, duration, and race position. By applying multivariate analysis, they will model how pit stops, tire degradation, and stint lengths affect race results. Regression models will help participants predict race outcomes based on the strategic choices made by teams, such as pit stop timing and the number of stints per tire compound.

Mission

The mission of this challenge is to develop a data-driven framework for analyzing race strategies in Formula 1. Participants will use EDA and statistical analysis to understand how tire management and pit stop decisions impact race outcomes. They will quantify these impacts by calculating lap times, identifying strategic patterns, and validating their findings with hypothesis testing.

Participants will also analyze how race length affects strategy. They will investigate whether longer races lead to more pit stops or different tire choices. Time series analysis will help them track strategy shifts during longer races and compare them across teams and drivers.

Rewards

The $10,000 prize pool will be distributed among the top 10 performers:

Prize pool rewards and point distribution

Participants will also earn points toward the 2024 championship. Accumulating points correlates with increased rewards, as seen in the 2023 Championship, where top performers received an additional $10 for each point earned throughout the year.

Opportunities

This challenge is not just about winning rewards; it’s about enhancing your skills in advanced data science techniques such as regression analysis, time series modeling, and clustering algorithms. By applying these techniques to real-world racing data, you’ll learn how to analyze complex datasets, identify patterns in race strategies, and derive actionable insights that inform competitive decision-making. This experience will prepare you for roles in sports analytics and other data-driven industries, equipping you with practical expertise in strategy analysis.

How to Participate

Are you ready to join us on this quest? Whether you’re a seasoned data pro or just starting, there’s a place for you in our vibrant community of data scientists. Let’s explore and discover together on Desights, our dedicated data challenge platform. The challenge runs from September 5 until September 24, 2024, 13:00 UTC. Click here to access the challenge and become part of our data science community.

Community and Support

To engage in discussions, ask questions, or join the community conversation, connect with us on Ocean’s Discord channel #data-science-hub or the Desights support channel #data-challenge-support.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord — or track Ocean’s progress on GitHub.

Formula 1 Racing Challenge: 2024 Strategy Analysis was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 08. September 2024

KuppingerCole

Now or Never: Successful Transition From SAP Identity Management

SAP has announced the end of life for its identity management (IDM) system, which is a key component in many traditional SAP environments. This poses a challenge for organizations running on-premises SAP systems. To plan for a smooth transition, organizations should consider key strategies such as taking the time for thorough planning, thinking about the future of their IAM, and analyzing requirements before choosing a new solution.

The cost of implementation projects can be significant, but investing in proper preparation and tools upfront can save time and money in the long run. It is important to take a holistic view and consider the broader picture, including GRC and access governance solutions. Finding the right solution requires support from experts who understand the market and the organization's specific requirements.



Friday, 06. September 2024

Extrimian

DIDcon: Advances in Self-Sovereign Identity in Latin America

Introduction: DIDcon Identity Day

The first edition of DIDcon gathered experts from various fields in Buenos Aires, Argentina, to explore how decentralized identity technology enhances security, privacy, and data interoperability in an increasingly digitalized world.

Table of Contents

- What is Decentralized Identity?
- Summary of Talks at DIDcon
- Welcome and Introduction
- Security and Decentralization
- The Future of Identity
- Trust Ecosystems: Use Cases
- Conclusion

What is Decentralized Identity?

Self-Sovereign Identity (SSI) redefines the concept of digital identity by managing and storing information in a decentralized manner, using technologies like blockchain. This model allows individuals to control their personal information without relying on centralized intermediaries, significantly improving data security and privacy.

Summary of Talks at DIDcon

Welcome and Introduction

Jesús Cepeda, CEO and co-founder of OS City, and Diego Fernández, Secretary of Innovation and Digital Transformation of GCBA, opened the event by emphasizing decentralized identity as an essential tool that returns control of information to users. They highlighted how this technology unlocks global economic potential and combats cybercrime and the frictions of less intuitive solutions. They also pointed to QuarkID as an innovative example of how Latin America is implementing decentralized identity to enhance citizen security and privacy.

Security and Decentralization

In this talk moderated by Alfonso Campenni, Pablo Sabbatella, security researcher at SEAL and founder of Defy Education, emphasized how scams and cybercrimes have become more sophisticated. To combat this, he discussed how decentralization is an interesting path that strengthens the protection and security of information.

During the recent digital security panel, Pablo Sabbatella, an expert in the field, shared valuable recommendations for protecting our identities and data online. He stressed the importance of adopting safe practices in the digital age, especially in the context of increasing cyberattacks and vulnerabilities in the applications we use daily.

Main Security Recommendations by Pablo Sabbatella:

- Avoid Repeating Passwords: It’s crucial to have unique passwords for each service to prevent cross-access in case of data breaches.
- Use Two-Factor Authentication (2FA): Adding a second level of security is crucial. It is recommended to use code-generating apps instead of SMS or emails, which are less secure.
- Be Cautious with Personal Data: It is vital to limit the personal information shared online and in applications, especially the phone number, which is a sensitive piece of data.
- Avoid Downloading Pirated Software: Unofficial programs and applications can contain malware and seriously compromise personal and financial security.

These guidelines not only increase individual security but also foster a culture of awareness about online safety, which is essential for navigating safely in today’s digital world.

He also mentioned new standards being built for the implementation of Account Abstraction through smart contracts, which enhance key management and user experience.

The Future of Identity

In a panel moderated by Pablo Mosquella of Extrimian, experts such as Guillermo Villanueva, CEO and co-founder of Extrimian, Matthias Broner, Head of Growth LATAM at ZKsync, Mateo Sauton from Worldcoin, and Pedro Alessandri, Undersecretary of Smart City, debated how decentralized identity is transforming the digital landscape, creating a safer, more private, scalable, and interoperable environment. They also discussed the positive impact of QuarkID and its rapid expansion across Latin America, underscoring its potential to strengthen digital trust in the region.

Trust Ecosystems: Use Cases

In this session moderated by Lucas Jolías from OS City and Fabio Budris, Advisor to the Secretary of Innovation of the City of Buenos Aires, concrete use cases of decentralized identity were presented in managing procedures in Salta, at the National Technological University (UTN), and in pilot tests for organ transplant management at INCUCAI. These examples clearly illustrated the tangible impact of these technologies in key sectors such as government, education, and health.

Conclusion

DIDcon – Identity Day underscored the transformative power of Decentralized Identity to revolutionize society and maximize value in the physical, digital, and hybrid worlds. Initiatives like QuarkID are driving Latin America toward a more secure and reliable digital future, overcoming barriers that have historically limited its technological potential.

The adoption of these technologies not only promises to improve security and privacy but is also building a solid digital trust ecosystem that will bring significant benefits to all the involved countries.

Keywords: decentralization, SSI, DID, VC, QuarkID, Extrimian, blockchain, trust, security, privacy, interoperability, technology, digital identity.

The post DIDcon: Advances in Self-Sovereign Identity in Latin America first appeared on Extrimian.


PingTalk

Policy Based Access Control (PBAC) Explained

Discover how Policy Based Access Control (PBAC) works, its benefits, and implementation steps tailored for financial services.

Traditional access control methods, such as role-based access control (RBAC) and attribute-based access control (ABAC), have built the foundation for securing systems and managing user access. 

 

However, they fail to provide the flexibility and enhanced security needed in today’s dynamic environment–especially for the financial services industry. As organizations navigate stringent compliance requirements and evolving security threats, they need a better alternative to make dynamic, context-aware access decisions–like policy-based access control (PBAC). 

 

Below, we’ll explore PBAC in further detail, how it compares to other models, and how it benefits the financial services industry.
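At its core, a PBAC decision evaluates a named policy's conditions against the context of a request. The following is a minimal, illustrative sketch of that idea; the policy structure and field names are invented for this example and do not reflect any vendor's actual engine or API:

```python
from datetime import time

def evaluate(policy, context):
    """A request is allowed only if every policy condition holds for its context."""
    return all(cond(context) for cond in policy["conditions"])

# Hypothetical financial-services policy: approve a wire transfer only for
# an MFA-verified teller, under a limit, during business hours.
wire_transfer_policy = {
    "name": "approve-wire-transfer",
    "conditions": [
        lambda ctx: ctx["role"] == "teller",
        lambda ctx: ctx["amount"] <= 10_000,
        lambda ctx: time(9) <= ctx["time"] <= time(17),  # business hours
        lambda ctx: ctx["mfa_verified"],
    ],
}

ctx = {"role": "teller", "amount": 5_000, "time": time(11, 30), "mfa_verified": True}
print(evaluate(wire_transfer_policy, ctx))  # True
```

The point of the model is that changing the policy (say, lowering the transfer limit) changes access decisions everywhere, without touching application code or role assignments.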

Thursday, 05. September 2024

IdRamp

Account Takeover Attack (ATO) Defense: A Guide to Protecting Your Company

Account takeover (ATO) attacks have become a sophisticated and pervasive threat, with criminal organizations targeting businesses of all sizes and types. By gaining unauthorized access to company accounts, attackers can disrupt operations, steal sensitive data, and damage a company’s reputation.

The post Account Takeover Attack (ATO) Defense: A Guide to Protecting Your Company first appeared on Identity Verification Orchestration.

KuppingerCole

Authenticating Identities in the Age of AI: Strategies for Trustworthy Verification

In today's digital world, identity authenticity faces constant scrutiny, especially with the emergence of generative AI. However, modern tech provides innovative solutions. Chipped identity documents offer a trusted verification basis, embedding secure chips with verified data. Advancements like biometric authentication and blockchain-based verification ensure enhanced security and integrity. With these innovations, organizations can navigate identity verification confidently.

Join identity experts from KuppingerCole Analysts and InverID as they explore the pivotal role of chipped identity documents in reliable verification and their integration into eIDAS 2.0-compliant identity wallets. Discover strategies for establishing trust amidst faux realities, ensuring the integrity of digital identities.

Annie Bailey, Research Strategy Director at KuppingerCole Analysts, will discuss the implications of eIDAS 2.0 legislation and its impact on identity management. She will explain the concept of reusable verified identities and their significance in a multi-wallet ecosystem, as well as offer insights into preparing for a future with diverse credentials and the challenges it presents.

Wil Janssen, Co-founder and CRO of InverID, will explain the critical need for remote identity verification in today's digital landscape. He will illustrate how to leverage government-issued identity documents for secure verification, as well as highlight the importance of identity verification services in EU Wallets and beyond.




auth0

External User Verification with Forms

Learn how to leverage Auth0 Forms to implement an invitation code workflow and improve the onboarding of your SaaS users.

Evernym

Ensuring Compliance with Regulatory Requirements in Digital Security

Ensuring Compliance with Regulatory Requirements in Digital Security In an increasingly regulated world, ensuring compliance with digital security requirements is crucial for organizations of all sizes. Regulations and standards are designed to protect sensitive data, ensure privacy, and enhance the overall security of digital systems. However, navigating these requirements can be ...

The post Ensuring Compliance with Regulatory Requirements in Digital Security appeared first on Evernym.


Elliptic

Hong Kong Kicks Off Tokenization Sandbox with Major Institutional Players

Hong Kong has taken yet another important step to bolster its position as a leader in the Asia-Pacific region for well-regulated cryptoasset and blockchain innovation. 


Ocean Protocol

DF105 Completes and DF106 Launches

Predictoor DF105 rewards available. DF106 runs Sept 5 to Sept 12, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 105 (DF105) has completed.

DF106 is live today, Sept 5. It concludes on September 12. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF106 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF106

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF105 Completes and DF106 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

Elevate Access Token Security by Demonstrating Proof-of-Possession

We use access tokens to request data and perform actions within our software systems. The client application sends a bearer token to the resource server. The resource server checks the validity of the access token before acting upon the HTTP request. What happens if the requesting party is malicious, steals your token, and makes a fraudulent API call? Would the resource server honor the HTTP request? If you use a bearer token, the answer is “yes.”

My teammate wrote that an access token is like a hotel room keycard. Anyone holding a valid keycard can use it to access the room; likewise, anyone holding a valid access token can use it to access a resource server.

7 Ways an OAuth Access Token is like a Hotel Key Card

Learn 7 things OAuth 2.0 access tokens have in common with a hotel key card.

Aaron Parecki

Bearer tokens (and static API keys) mean whoever presents the valid token to the resource server has access, which makes the token powerful and vulnerable. We can look at high-profile token thefts to see how prevalent and disastrous token theft is, so we want to ensure our applications aren’t vulnerable to similar attacks.

To protect tokens, we incorporate secure coding techniques into our apps, configure a quick expiration time on the token, and ensure only requests sent to allowed origins include the access token. Still, token attacks pose a risk to highly sensitive resources. What more can we do to secure requests?

This post describes a new OAuth 2.0 spec supported by Okta that makes access tokens less prone to misuse and helps mitigate security risks. If you want to refresh your OAuth knowledge, check out What the heck is OAuth.

Table of Contents

- Bind OAuth 2.0 access tokens to client applications
- Demonstrate proof of possession (DPoP) using JWTs
- Incorporating DPoP into OAuth 2.0 token requests
- Use DPoP-bound access tokens in HTTP requests
- Extend the DPoP flow with an enhanced security handshake
- Validate DPoP requests in the resource server
- Learn more about OAuth 2.0, Demonstrating Proof-of-Possession, and secure token practices

Bind OAuth 2.0 access tokens to client applications

If we go back to the hotel keycard analogy, we want a hotel keycard that only you can use and that links you as the rightful user of the hotel keycard.

In the OAuth world, ideally, we want to link the authorization server, the client, and the access token and limit token use to the client. In OAuth terminology, the sender and client application are the same entity. By linking these entities, external parties can’t misuse the access token.

OAuth 2.0 defines a few methods to bind access tokens.

🤐 Client secret
Confidential clients are applications running in a protected environment where user authentication and token storage occur within backend servers, such as traditional server-rendered web applications. Confidential clients can use a secret value known to the requestor (the client application requesting the tokens) and the authorization server as part of HTTP requests. The client secret is a long-lived value generated by the authorization server. However, malicious parties who steal the secret can use it.

🌐 Mutual TLS Client Authentication and Certificate-Bound Access Tokens (mTLS)
Mutual authentication means parties at the ends of the network connection identify themselves using a combination of asymmetric encryption and TLS certificate as part of the HTTP request. mTLS is a highly secure method for confidential clients but can be complex to implement and maintain.

🔒 Private key JSON Web Token (JWT)
Machine-to-machine HTTP requests don’t have user context. The requesting service often uses a combination of an ID and secret using the Basic authorization scheme when making HTTP calls, but doing so isn’t secure. Private key JWTs offer a more secure approach. The requesting service uses asymmetric encryption to sign any JWTs it creates.

These methods apply only to confidential clients that can maintain secrets, not to public clients.

Public clients are apps that run authentication code on the user’s hardware, such as Single-Page Applications (SPAs) and mobile clients. Applications built on a public client architecture are exposed to token theft and misuse unless carefully protected. Is there an alternative that works for both confidential and public clients without incurring costly implementation and maintenance?

Demonstrate proof of possession (DPoP) using JWTs

There’s now a solution for all client types calling sensitive resources! The IETF published a new extension to OAuth 2.0: Demonstrating Proof of Possession (DPoP), targeted primarily for public client use. You may have heard of this idea before, as the concept has been around for a while. With a published spec, it’s now official, standardized, and supported!

The client and authorization server work together to generate tokens with proof of possession.

- The client creates non-repudiable proof of ownership using asymmetric encryption.
- The authorization server uses this proof when generating the token.

How is this different from earlier methods that bind the caller to the access token? The big difference is this method happens at runtime across any client type. Confidential clients have cryptographic libraries supporting public/private key encryption, but a gap exists for public clients. Thanks to enhanced browser API capabilities such as the Web Crypto API and SubtleCrypto, modern browser-based JavaScript apps can also use DPoP.

🚨 You must protect the client from Cross-Site Scripting (XSS) and Remote File Inclusion (RFI) attacks to prevent exfiltration or unauthorized use of the keyset. 🚨

Store the keys in a storage format that someone can’t export and guard the app against attacks where an attacker’s code can run in the user’s context. Use up-to-date secure SPA frameworks, employ defensive coding practices, and add appropriate Content Security Policies (CSP) to protect the client. Apply secure header best practices and consider using the Trusted Types API if you can limit end-user browser usage to browsers that support it.
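As one concrete illustration of the CSP advice above, a restrictive baseline policy for a SPA might look like the following. This is an illustrative starting point only; the directives must be adjusted to your app's actual script, style, and API origins or the app will break:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'; frame-ancestors 'none'
```

Disallowing inline scripts (no `'unsafe-inline'` in `script-src`) is the part that most directly blunts XSS-based key and token exfiltration.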

⚠️ Note

We will investigate DPoP proofs and inspect how the client constructs them. However, despite this knowledge, you should always use Okta SDKs or a vetted, well-maintained library with built-in DPoP support when making requests using DPoP.

Incorporating DPoP into OAuth 2.0 token requests

When using DPoP, the client creates a “proof” using asymmetric encryption. The proof is a JWT, which includes the URI, the HTTP method of the request, and the public key. The client application requests tokens from the authorization server and includes the proof as part of the request. The authorization server binds a hash of the proof’s public key within the access token it returns to the client. This means the access token is only usable by the client holding the matching private key, and each proof is only valid for the specific HTTP request it describes.

A sequence diagram for the OAuth 2.0 Authorization Code flow with DPoP looks like this:

The proof contains metadata proving the sender and ways to limit unauthorized use by limiting the HTTP request, the validity window, and reuse. If you inspect a decoded DPoP proof JWT, you’ll see the header contains information proving the sender:

- The typ claim set to dpop+jwt
- The public/private key encryption algorithm
- The public key in JSON Web Key (JWK) format

Inspecting the decoded proof’s payload shows claims that limit unauthorized use, such as:

- HTTP request info, including the URI and HTTP method (such as /oauth2/v1/token and POST)
- Issue time to limit the validity window for the proof
- An identifier that’s unique within the validity window to mitigate replay attacks
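The header and payload described above can be sketched in standard-library Python. The JWK values are stubs, and the final step (signing the two encoded parts with the client's private key, e.g. ES256) needs a crypto/JWT library, so only the unsigned portion is shown:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url encoding without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# DPoP proof header: type marker, signing algorithm, and the public key (stubbed).
header = {
    "typ": "dpop+jwt",
    "alg": "ES256",
    "jwk": {"kty": "EC", "crv": "P-256", "x": "<stub>", "y": "<stub>"},
}

# DPoP proof payload: request binding, issue time, and a replay-mitigating ID.
payload = {
    "htm": "POST",                                       # HTTP method
    "htu": "https://{yourAuthServer}/oauth2/v1/token",   # request URI (placeholder host)
    "iat": int(time.time()),                             # limits the validity window
    "jti": str(uuid.uuid4()),                            # unique per proof
}

unsigned = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
# A real proof is `unsigned` plus a third, signature segment produced by the private key.
```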

Let’s inspect the /token request a little further. When making the request, the client adds the proof in the header. The rest of the request, including the grant type and the code itself, remains the same for the Authorization Code flow.

POST /oauth2/v1/token HTTP/1.1
DPoP: eyJ0eXAiOiJkcG9w.....H8-u9gaK2-oIj8ipg
Accept: application/json
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=XGa_U6toXP0Rvc.....SnHO6bxX0ikK1ss-nA

The authorization server decodes the proof and incorporates properties from the JWT into the access token. The authorization server responds to the /token request with the token and explicitly sets the response header to state the token type as DPoP.

HTTP/1.1 200 OK
Content-Type: application/json

{
  "access_token": "eyJhbG1NiIsPOk.....6yJV_adQssw5c",
  "token_type": "DPoP",
  "expires_in": 3600,
  "refresh_token": "5PybPBQRBKy2cwbPtko0aqiX"
}

You now have a DPoP type access token with a possession proof. What changes when requesting resources?

Use DPoP-bound access tokens in HTTP requests

DPoP tokens are no longer bearer tokens; the token is now “sender-constrained.” The sender, the client application calling the resource server, must have both the access token and a valid proof, which requires the private key held by the client. This means malicious sorts need both pieces of information to impersonate calls into the server. The spec builds in constraints even if a malicious sort steals the token and the proof. The proof limits the call to a unique request for the URI and method within a validity window. Plus, your application system still has the defensive web security measures applicable to all web apps, preventing the leaking of sensitive data such as tokens and keysets.

The client generates a new proof for each HTTP request and adds a new property, a hash of the access token. The hash further binds the proof to the access token itself, adding another layer of sender constraint. The proof’s payload now includes:

- HTTP request info, including the URI and HTTP method (such as https://{yourResourceServer}/resource and GET)
- Issue time to limit the validity window for the proof
- An identifier that’s unique within the validity window to mitigate replay attacks
- Hash of the access token

Clients request resources by sending the access token in the Authorization header, along with proof demonstrating they’re the legitimate holders of the access token to resource servers using a new scheme, DPoP. HTTP requests to the resource server change to

GET https://{yourResourceServer}/resource HTTP/1.1
Accept: application/json
Authorization: DPoP eyJhbG1NiIsPOk.....6yJV_adQssw5c
DPoP: eyJhbGciOiJIUzI1.....-DZQ1NI8V-OG4g

The resource server verifies the validity of the access token and the proof before responding with the requested resource.

Extend the DPoP flow with an enhanced security handshake

DPoP optionally defines an enhanced handshake mechanism for calls requiring extra security measures. The client could sneakily create proofs for future use by setting the issued time in advance, but the authorization and resource servers can wield their weapon, the nonce. The nonce is an opaque value the server creates to limit the request’s lifetime. If the client makes a high-security request, the authorization or resource server may issue a nonce that the client incorporates within the proof. Doing so binds the specific request and time of the request to the server.

An example of a highly secure request is when making the initial token request. Okta follows this pattern. Different industries may apply guidance and rules for the types of resource server requests requiring a nonce. Since the enhancement requires an extra HTTP request, use it minimally.

When the authorization server’s /token request requires a nonce, the server rejects the request and returns an error. The response includes a new header type, DPoP-Nonce, with the nonce value, and a new standard error message, use_dpop_nonce. The flow for requesting tokens now looks like this:

Let’s look at the HTTP response from the authorization and resource servers requiring a nonce. The authorization server responds to the initial token request with a 400 Bad Request and the needed nonce and error information.

HTTP/1.1 400 Bad Request
DPoP-Nonce: server-generated-nonce-value

{
  "error": "use_dpop_nonce",
  "error_description": "Authorization server requires nonce in DPoP proof"
}

When the resource server requires a nonce, the response changes. The resource server returns a 401 Unauthorized with the DPoP-Nonce header and a WWW-Authenticate header containing the use_dpop_nonce error message.

HTTP/1.1 401 Unauthorized
DPoP-Nonce: server-generated-nonce-value
WWW-Authenticate: error="use_dpop_nonce", error_description="Resource server requires nonce in DPoP proof"

We want that resource, so it’s time for a new proof! The client reacts to the error and generates a new proof with the following info in the payload:

- HTTP request info, including the URI and HTTP method (such as https://{yourResourceServer}/resource and GET)
- Issue time to limit the validity window for the proof
- An identifier that’s unique within the validity window to mitigate replay attacks
- The server-provided nonce value
- Hash of the access token

With this new proof, the client can remake the request.
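The error-then-retry handshake can be sketched as a small client-side wrapper. `send_request` and `make_proof` are hypothetical stand-ins for your HTTP client and DPoP proof generation, not real library calls:

```python
def request_with_dpop(send_request, make_proof):
    """Try the request; if the server demands a nonce, regenerate the
    proof with that nonce and retry once."""
    resp = send_request(proof=make_proof(nonce=None))
    if resp.get("error") == "use_dpop_nonce":
        nonce = resp["headers"]["DPoP-Nonce"]       # server-supplied nonce
        resp = send_request(proof=make_proof(nonce=nonce))
    return resp
```

A single retry suffices because the server's nonce stays valid for a window of time; only when it rotates does the client need to repeat the handshake.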

Validate DPoP requests in the resource server

Okta’s API resources support DPoP-enabled requests. If you want to add DPoP support to your own resource server, you must validate the request. You’ll decode the proof to verify the properties in the header and payload sections of the JWT. You’ll also need to verify properties within the access token. OAuth 2.0 access tokens can be opaque, so use your authorization server’s /introspect endpoint to get token properties. Okta’s API security guide, Configure OAuth 2.0 Demonstrating Proof-of-Possession has a step-by-step guide on validating DPoP tokens, but you should use a well-maintained and vetted OAuth 2.0 library to do this for you instead. Finally, enforce any application-defined access control measures before returning a response.
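The structural checks on a proof can be sketched with the standard library. This intentionally omits the two parts you must delegate to a vetted JWT library: verifying the signature against the jwk in the proof header, and tracking jti values for replay detection:

```python
import base64
import hashlib
import json
import time

def _b64url_decode(part: str) -> bytes:
    """Base64url decoding tolerant of stripped padding."""
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def check_dpop_proof(proof_jwt: str, method: str, uri: str,
                     access_token: str, max_age: int = 300) -> bool:
    """Structural DPoP proof checks: type, request binding, freshness,
    and access-token hash. Signature and replay checks are NOT done here."""
    header_b64, payload_b64, _signature = proof_jwt.split(".")
    header = json.loads(_b64url_decode(header_b64))
    payload = json.loads(_b64url_decode(payload_b64))

    expected_ath = base64.urlsafe_b64encode(
        hashlib.sha256(access_token.encode()).digest()).rstrip(b"=").decode()

    return (header.get("typ") == "dpop+jwt"
            and payload.get("htm") == method
            and payload.get("htu") == uri
            and abs(time.time() - payload.get("iat", 0)) <= max_age
            and payload.get("ath") == expected_ath)
```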

Learn more about OAuth 2.0, Demonstrating Proof-of-Possession, and secure token practices

I hope this intro to sender-constrained tokens is helpful and inspires you to use DPoP to elevate token security! Watch for more content about DPoP, including hands-on experimentation and code projects. If you found this post interesting, you may also like these resources:

- Secure OAuth 2.0 Access Tokens with Proofs of Possession
- Why You Should Migrate to OAuth 2.0 From Static API Tokens
- How to Secure the SaaS Apps of the Future
- Step-up Authentication in Modern Application
- OAuth 2.0 Security Enhancements
- Add Step-up Authentication Using Angular and NestJS

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!


PingTalk

Strong Customer Authentication & Compliance Under PSD2

Understand Strong Customer Authentication (SCA) and PSD2 compliance. Learn about requirements, best practices, and exemptions.

In the physical world, it’s relatively straightforward for banks, credit card issuers, and other institutions to verify a customer’s identity with a valid ID before they can access their accounts. But when it comes to securing online accounts and payment services, it isn’t as cut and dried.

Single-factor authentication methods, like password-based security, no longer suffice in the modern landscape. It is now standard to use multiple authentication factors to ensure customers are who they claim to be.

Global regulators have taken notice of the rising threats to online account security and passed legislation to standardize and strengthen authentication requirements in the financial sector. This includes the introduction of strong customer authentication (SCA) requirements, which are now enforced throughout the EU and the UK.

Below, we’ll cover strong customer authentication, the SCA requirements set out in PSD2, and what to expect from the new legislation, PSD3 and PSR1.

Wednesday, 04. September 2024

Spherical Cow Consulting

Why FIPS 140-3 Matters for Cryptography and Digital Identity Security

Cryptography is all about securing communications. Authentication, key exchange, token signing, digital signatures, zero-knowledge proofs, and so much more depend on cryptographic algorithms that no mere mortal (by which I mean me) will ever understand. The good news is that mere mortals do not need to understand these algorithms. Governments have the resources to truly dig into these algorithms and determine whether they are as secure and effective as intended. In the U.S., something called FIPS 140 sits at the heart of determining whether a cryptographic module—the actual hardware or software implementing these algorithms—is secure enough.

FIPS 140-3 is the latest iteration of the U.S. Federal Information Processing Standard (FIPS) that specifies the security requirements for cryptographic modules used by federal agencies and other organizations to protect sensitive information. If you have a cybersecurity company that does business with the U.S. Government, then you care about FIPS 140-3. If you don’t have a cybersecurity company but buy cybersecurity tools, knowing that the cryptographic modules they use to secure your data meet the FIPS 140-3 standards is a Very Good Thing.

If you aren’t involved in tech purchasing decisions for your company, this post will serve as interesting trivia for you to wow your geeky friends with over beverages. Apologies in advance for all the acronyms; they can’t be avoided if you’re in the world of tech.

Definitions

First, let’s get a few definitions out there:

Cryptography: Refers to the broader field of securing communications through mathematical techniques.
Cryptographic Algorithm: A specific method or procedure, like AES or RSA, used within the field of cryptography to encrypt or decrypt data, sign messages, or generate keys.
Cryptographic Module: A hardware or software component that implements cryptographic algorithms and provides secure services like encryption, decryption, authentication, or key management.

FIPS 140

The first FIPS 140 was published thirty years ago (where has time gone???). The U.S. federal government realized it needed to get a handle on how the government as a whole should use cryptographic modules in its tech. Prior to that, it was something of a free-for-all. Each agency made its own decisions based on whatever information and staff it had on hand. Not great.

The best thing about version 1 of anything is that it suddenly sparks all SORTS of discussion. There are new requirements, positive and negative feedback, and a desire to improve. That resulted in FIPS 140-2, published over 20 years ago in 2001. (I’m still feeling old here.) FIPS 140-2 provided clearer definitions and more detailed requirements. Just as well, since the science of cryptography had advanced and new cryptographic algorithms needed to be considered.

The U.S. Government obviously isn’t the only entity out there working out the best way to evaluate cryptographic algorithms. That’s where the International Organization for Standardization (ISO) came in. In 2012, ISO published ISO/IEC 19790:2012, “Information technology — Security techniques — Security requirements for cryptographic modules.” The U.S. National Institute of Standards and Technology (NIST) was a member of the team making that global standard. As it came time to yet again refresh FIPS 140, it made sense to point it to ISO/IEC 19790:2012. That’s now FIPS 140-3.

Cryptographic Module Validation Program (CMVP)

So now there’s a standard, updated over time, that says, “Here are the requirements for cryptographic modules to be used by the federal government.” Great! How does the government ensure that those modules meet those requirements? That’s where the Cryptographic Module Validation Program (CMVP) comes in.

The CMVP is a joint effort between NIST and the Canadian Centre for Cyber Security. It provides guidelines for accredited Cryptographic and Security Testing Laboratories (CSTLs). Following those guidelines, the laboratories verify that a cryptographic module submitted by a vendor satisfies the requirements. The CSTL’s findings are submitted back to the program. If everything is copacetic, the module is added to the list of modules federal agencies can accept in their tools and services.

FIPS 140, the CMVP, and Digital Identity

So, how does this all tie into the world of digital identity? I have a list!

There are two things in particular to remember. First, of course, is noting that cryptography is used in a variety of ways when it comes to digital identity. Encrypting tokens, signatures, keys, and more is a fundamental necessity. Second, the federal government spends a mind-boggling amount on cybersecurity. This means their requirements for cybersecurity—such as the cryptographic modules used in the tools and services they purchase—influence almost everything in the cybersecurity industry. While following the FIPS 140 guidelines is only _required_ for federal agencies, in practice, its reach is much broader.

Given those points, FIPS 140-3 helps lay the groundwork for secure digital identity by ensuring that the cryptographic modules used are not just good, but government-approved good. And if that isn’t enough, given that FIPS 140-3 now basically points to an internationally developed standard in the form of ISO/IEC 19790:2012, then you’re talking about something that has achieved consensus on a global scale. That’s a level of assurance that goes beyond just checking a box. It’s knowing that the systems managing your identity are backed by some of the best cryptographic practices in the world.

Wrap Up

As a regular consumer, you really don’t need to know about FIPS 140 and its associated validation program. As a cybersecurity practitioner, you should at least be aware that it exists and understand its implications. And as an executive responsible for the security of your company or what goes into your products, all of this should be familiar to you already.

This is going to be an area I learn more about over the next few months. And since I learn best through writing, you can expect more blog posts on the topic of how the U.S. Government thinks about cryptographic modules. Stay tuned!

I want to help you go from overwhelmed at the rapid pace of change in identity-related standards to prepared to strategically invest in the critical standards for your business. Follow me on LinkedIn or reach out to discuss my Digital Identity Standards Development Services.

The post Why FIPS 140-3 Matters for Cryptography and Digital Identity Security appeared first on Spherical Cow Consulting.


KuppingerCole

Oct 15, 2024: A False Sense of Security: Authentication Myths That Put Your Company at Risk 

In today's digital landscape, organizations often fall prey to a false sense of security, particularly concerning authentication practices. Misconceptions about identity security can leave companies vulnerable to evolving threats, potentially compromising sensitive data and systems. Understanding the realities behind these myths is crucial for developing robust authentication strategies.

Ontology

Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms

In today’s digital landscape, the rapid pace of technological innovation has brought us to a crossroads, where the ideals of privacy, autonomy, and freedom meet the very real challenges of regulation. While decentralized platforms promise a world free from the prying eyes of governments and corporations, they also pose significant challenges, particularly when they are used to facilitate illegal activities. Take, for example, the infamous cases of Silk Road, Tornado Cash, and Telegram — each a flashpoint in the ongoing battle between technological freedom and the need for regulation. But what if there were a way to strike a balance? A decentralized reputation system, paired with anonymous identities, could offer a middle ground, where freedom meets responsibility.

The Evolution of Privacy Platforms: Case Studies

Silk Road: The Dark Web’s Pioneer

Silk Road was more than just an online black market; it was the first glimpse into a future where decentralized platforms could operate outside the reach of traditional law enforcement. Founded by Ross Ulbricht in 2011, Silk Road leveraged Bitcoin and the Tor network to create a truly global, anonymous marketplace. It was a hub for illegal activities — primarily drug trafficking — hidden from the watchful eyes of the law. The importance of Silk Road lies not just in its role as a market but in how it demonstrated the power of cryptocurrencies and decentralized platforms. It set a precedent, showing how these technologies could facilitate both freedom and crime on a massive scale.

Tornado Cash: Anonymizing Cryptocurrency Transactions

Tornado Cash pushed the boundaries of financial privacy. This cryptocurrency mixer on the Ethereum blockchain provided users with the tools to anonymize their transactions, protecting their financial data from surveillance. But with great power comes great responsibility — or, in this case, irresponsibility. Tornado Cash became a haven for money laundering, exploited by criminals and even North Korean hackers. The arrest of Tornado Cash developer Alexey Pertsev by Dutch authorities in August 2022 sparked a heated debate about the balance between privacy and security, and whether developers should be held accountable for the misuse of their creations.

Telegram: A Platform for Secure Communication

Telegram’s commitment to privacy and encryption has made it the go-to app for nearly 1 billion users seeking secure communication. From activists to journalists, many rely on Telegram to protect their privacy in the face of government surveillance. However, while Telegram is not decentralized, its strong encryption and anonymity features have also made it attractive to criminal organizations, coordinating everything from drug trafficking to child exploitation. The recent arrest of Telegram’s CEO, Pavel Durov, in France has intensified the debate about the role of tech platforms in moderating content and their accountability for illegal activities.

The Regulatory Response: Challenges and Consequences

The arrests of figures like Ulbricht, Pertsev, and Durov are part of a broader governmental push to regulate decentralized and privacy-focused platforms. But this raises some tough questions: Are we stifling innovation and free speech in the process? The legal complexities of regulating these platforms, especially when it comes to holding developers accountable, highlight the difficulty in balancing privacy with security.

Proposed Solution: Decentralized Identity and Reputation Systems

So, how do we move forward? One potential solution lies in the development of decentralized reputation systems paired with anonymous identities. Imagine a world where users can maintain their privacy while building a reputation based on their actions within the community. Such a system could empower communities to self-regulate, reducing the need for external oversight.

Anonymous Identity Systems

Anonymous identity systems could be the key to balancing privacy with accountability. These systems would allow users to engage with decentralized platforms without revealing their true identities, while still being held accountable for their actions.

Decentralized Reputation Systems

A decentralized reputation system could serve as a form of self-regulation. Users would build reputations based on their behavior, with ethical actions rewarded and illegal activities flagged or excluded. This could mitigate the need for heavy-handed regulation while preserving the core values of decentralization.

Practical Considerations and Challenges

Of course, implementing such systems won’t be without challenges. From technical limitations to potential exploitation, these solutions require careful design and community buy-in. But with transparency and engagement, we could create a system that balances freedom with responsibility.

Conclusion

The stories of Silk Road, Tornado Cash, and Telegram underscore the dual-edged sword of privacy-focused technology. While these platforms offer unprecedented privacy and autonomy, they also create new avenues for crime. A balanced approach, using decentralized reputation systems and anonymous identities, could offer a path forward. As we continue to navigate the digital age, it’s essential that we foster dialogue between innovators, regulators, and users to ensure that technology serves the greater good, protecting both freedom and security in this brave new world.

Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Welcome to Bluesky!

What a week! In the last few days, more than 2.6 million users have signed up for the platform, and more than 85% of them are Brazilian. Welcome, we’re so happy to have you here!

What makes Bluesky different?

At its core, Bluesky prioritizes you and gives you more control. Here, you can choose the social experience that suits you best.

Our community has grown organically and is full of authors, artists, journalists, politicians, and more. Brazilian users already on the platform have noticed that the engagement they get here is of much higher quality than on any other platform.

Bluesky is also an open ecosystem. We built an open social network so that any developer can build on it through the AT Protocol (the ecosystem is called the Atmosphere). This openness means that Bluesky is a collaborative project, unlike other social networks that are controlled by a single company. Anyone can build feeds, moderation services, and even entirely new apps on top of our platform.

When will Bluesky get video and trending topics?

Video will be available in our next major update, and we’re already working on trending topics. We’re paying close attention to your feedback and are thrilled by the excitement.

What are some of Bluesky’s unique features?

Custom Feeds

Besides the chronological Following feed and the classic Discover feed, you can try out new feeds! For example, if you want to see posts from friends who don’t post much, try Quiet Posters. If you want to see the most popular posts across the whole platform from the previous day, try Catch Up.

Anyone can create and subscribe to feeds. Instead of providing a single algorithm, we let our users choose. You’re in control. The idea is to promote healthier discussions, in part because we don’t incentivize engagement-farming schemes, misinformation, fake news, or any other kind of abuse.

Usernames

If you own a website, you can use it as your username. For example, Folha de S. Paulo chose @folha.com as its username. Just remember that you can only use the name of a website you own, since this is a way of showing that you are, for example, the real Folha de S. Paulo. It’s a way to prove you’re legitimate.

You can have fun and get creative with this! For example, many Swifties have chosen usernames ending in “swifties.social,” which you can set up using this tool here.

If you’d like to purchase and manage a website through our partner Namecheap, you can do that here.

I’m tired of creating new accounts on social networks! Is Bluesky guaranteed to stick around?

We know, and we deeply understand that concern. But Bluesky is here to stay.

When a platform like X shuts down, you lose touch with all your friends there. But because Bluesky is an open network, you can take your followers with you, so you’ll always be able to stay in touch with your friends. (If you’re interested in the technical details, there’s more information about account portability here.)

And there’s more! Because it’s an open social network, independent developers can build entirely new apps and offer you other experiences. Imagine a blogging platform or a photo app on this same network, with all your friends already connected. You won’t need to sign up for yet another social app this time — you’ll be creating an online social identity that belongs to you.

How does Bluesky handle free speech and content moderation?

Safety and fostering healthy spaces for conversation are central concerns for Bluesky. Our moderation team is on duty 24/7 and responds to most reports within a few days. To report a post or account, simply click the three-dot menu and then “Report Post” or “Report Account.”

At the same time, we understand that no single approach works for moderating every space. So, beyond Bluesky’s solid baseline moderation policies, you can subscribe to other organizations you trust, or to communities with specific expertise, which can add their own moderation rules. (Read more about ways to add moderation rules here.)

How do you plan to handle election misinformation?

Aaron Rodericks, the head of our trust and safety team, dealt with these issues at Twitter and brought that experience here. Our moderation team reviews content and accounts for misinformation, which users can report directly from the app. In cases of severe violations, such as threats to voting or official elections, we may remove the content or even the account. In most cases, we review claims that content is false against reliable sources, and we reserve the right to label posts as misinformation.

Journalists can reach us at press@blueskyweb.xyz. For our media kit, where you’ll find our logo and photos, click here.

Welcome to Bluesky!

What a week! In the last few days, Bluesky has grown by more than 2.6 million users, over 85% of which are Brazilian. Welcome, we are so excited to have you here!

What makes Bluesky different?

By design, Bluesky gives users more control and prioritizes you. Here, you can customize your social experience to fit you.

Our community has grown organically, and is full of creators, artists, journalists, politicians, and more. Brazilian users on Bluesky have noticed that they receive much higher quality engagement on Bluesky than on any other platform.

In addition, Bluesky is an open ecosystem. We’re built on an open network that developers can freely build upon called the AT Protocol (and the ecosystem is called the Atmosphere). This openness means that Bluesky is a collaborative project, unlike other social networks that are controlled by a single company. Anyone can build feeds, moderation services, and even entirely new apps on top of our network.

What are some unique features on Bluesky?

Custom Feeds

Outside of your chronological Following feed and the default Discover feed, you can try out some new feeds! Maybe you want to see posts from your friends who don’t post as often — try Quiet Posters. If you want to see the top posts across the whole network from the last day, try Catch Up.

Anyone can create and subscribe to feeds. Instead of providing only a single algorithm, we let users choose. You’re in control. This promotes healthier discussion because we do not incentivize engagement baiting, misinformation, or harassment.

Usernames

You can set your username to be a website that you own. For example, Folha de S. Paulo set their Bluesky username to @folha.com. You can only set your username to a website that you own, so this shows you that the real Folha de S. Paulo owns this account. It’s one form of self-verification.

There’s lots of room to have fun with this! For example, many Swifties are using usernames that end in “swifties.social,” which you can set up with this community tool here.

If you’d like to purchase and manage a website through Bluesky’s partnership with Namecheap, you can do that here.

I’m tired of creating accounts on new social apps! Will Bluesky stick around?

We know, we’ve been there too. Bluesky is here to stay.

When an app like X shuts down, you lose touch with all your friends there. But because Bluesky is built on an open network, you can easily take your followers with you. You will always be able to stay in touch with your friends. (If you’re interested in the technical details, you can read more about account portability here.)

Additionally, because of the open network, independent developers can build entirely new apps and experiences. Imagine a blogging platform or a photo app built on this same network, with all of your friends already connected. You’re not just signing up for another social app this time — you’re creating a social identity online that you own.

When will Bluesky have video and trending topics?

Video will be available in the next major app release, and we’re working on trending topics too. We’re paying close attention to your feedback and appreciate everyone’s excitement.

How does Bluesky handle content moderation?

Trust and safety is core to Bluesky, and we value spaces for healthy conversation. Our moderation team provides 24/7 coverage and responds to most reports within a few days. To report a post or an account, simply click the three-dot menu and click “Report post” or “Report account.”

At the same time, we recognize that there’s no one-size-fits-all approach to moderation. So, on top of Bluesky's strong foundation, users can subscribe to additional moderation decisions from more organizations they trust with industry-specific or community-specific knowledge. (Read more about our stackable approach to moderation here.)

What is your plan for election misinformation?

Aaron Rodericks, Bluesky's Head of Trust & Safety, formerly led election integrity efforts at Twitter and has brought his experience here. Our moderation team reviews content or accounts for misinformation, which users can report directly within the app. In the case of severe violations such as a risk to polling places or election officials, we may remove content or accounts. In most cases, we review claims against credible sources and fact checkers, and may label posts as misinformation.

Journalists can reach us with inquiries at press@blueskyweb.xyz. For our media kit, where you can find our logo and headshots, click here.

Tuesday, 03. September 2024

Microsoft Entra (Azure AD) Blog

MFA enforcement for Microsoft Entra admin center sign-in coming soon

As cyberattacks become increasingly frequent, sophisticated, and damaging, safeguarding your digital assets has never been more critical. In October 2024, Microsoft will begin enforcing mandatory multifactor authentication (MFA) for the Microsoft Entra admin center, Microsoft Azure portal, and the Microsoft Intune admin center.

We published a Message Center post (MC862873) to all Microsoft Entra ID customers in August. We’ve included it below:

Take action: Enable multifactor authentication for your tenant before October 15, 2024

Starting on or after October 15, 2024, to further increase your security, Microsoft will require admins to use multifactor authentication (MFA) when signing into the Microsoft Azure portal, Microsoft Entra admin center, and Microsoft Intune admin center.

Note: This requirement will also apply to any services accessed through the Intune admin center, such as Windows 365 Cloud PC. To take advantage of the extra layer of protection MFA offers, we recommend enabling MFA as soon as possible. To learn more, review Planning for mandatory multifactor authentication for Azure and admin portals.

How this will affect your organization:

MFA will need to be enabled for your tenant to ensure admins are able to sign into the Azure portal, Microsoft Entra admin center, and Intune admin center after this change.

What to do to prepare:

If you have not already, set up MFA before October 15, 2024, to ensure your admins can access the Azure portal, Microsoft Entra admin center, and Intune admin center. If you are unable to set up MFA before this date, you can apply to postpone the enforcement date. If MFA has not been set up before the enforcement starts, admins will be prompted to register for MFA before they can access the Azure portal, Microsoft Entra admin center, or Intune admin center on their next sign-in.

For more information, refer to: Planning for mandatory multifactor authentication for Azure and admin portals.

Jarred Boone

Senior Product Marketing Manager, Identity Security

Read more on this topic

Planning for mandatory multifactor authentication for Azure and other administration portals

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

KuppingerCole

Passwordless Authentication for Enterprises

by Alejandro Leal

Explore the rise of passwordless authentication, its security benefits, and how it mitigates common password-based attacks like phishing, brute-force, and ATO fraud. This Buyer's Compass can help you find the solution that best fits your business needs.

PingTalk

Tailored Government ICAM Capabilities in FedRAMP High & DoD IL5

Ping Identity expands its FedRAMP High and DoD IL5 offerings with the addition of critical identity, credential, and access management capabilities.

If you're in the government space, you know how crucial it is to balance robust security and access with a seamless digital experience. We're thrilled to announce some major updates to Ping Government Identity Cloud that'll make your lives a whole lot easier. These capabilities are crucial for hitting essential security benchmarks, especially for DoD agencies and the Defense Industrial Base (DIB).

Monday, 02. September 2024

Dock

Dock and cheqd Form Alliance to Accelerate Global Adoption of Decentralized ID

We are excited to announce that the Dock and cheqd tokens and blockchains are merging to form a Decentralized ID alliance.

By harnessing the combined strengths of two industry pioneers, Dock and cheqd will accelerate the global adoption of decentralized identity and verifiable credentials, empowering individuals and organizations worldwide with secure and trusted digital identities.

Existing $DOCK tokens will be converted into $CHEQ tokens (pending governance approval from token holders in both communities). This will mark a new chapter of opportunity for our token holders who will benefit from all the Web3 resources cheqd has at their disposal. 

Full article: https://dock.io/post/dock-and-cheqd-form-alliance-to-accelerate-global-adoption-of-decentralized-id


KuppingerCole

SOAR Platforms and Generative AI: Building an AI-Skilled Workforce

by Alejandro Leal From Luddites to AI Legend has it that in 1779, a man named Ned Ludd, angered by criticism and orders to change his traditional way of working, smashed two stocking frames. This act of defiance became emblematic of the “Luddite” movement against the encroaching mechanization that threatened the livelihoods of skilled artisans during the early Industrial Revolution. Throughou

by Alejandro Leal

From Luddites to AI

Legend has it that in 1779, a man named Ned Ludd, angered by criticism and orders to change his traditional way of working, smashed two stocking frames. This act of defiance became emblematic of the “Luddite” movement against the encroaching mechanization that threatened the livelihoods of skilled artisans during the early Industrial Revolution.

Throughout history, workers have adapted to new technologies, from the complex machinery of the Industrial Revolution to today's sophisticated AI systems. Initially, industrial workers had to master mechanical operations to support mass production. Later, the digital revolution demanded proficiency with computers for a variety of tasks.

Now, the integration of AI in workplaces emphasizes skills in managing and leveraging intelligent systems to boost productivity and decision-making processes. This ongoing evolution demonstrates the need for continuous learning and adaptability, underscoring the increasing complexity of skills involved in today’s jobs.

The Evolving Role of Cybersecurity Analysts

Building an AI-skilled workforce requires not only equipping professionals with the tools and knowledge necessary to leverage AI technologies, but also addressing the persistent challenges of the human factor in cybersecurity by implementing the right tools, cultivating a cybersecurity culture, and fostering new skills.

For example, the art of prompt engineering is a relatively new and useful skill. This discipline allows analysts to develop and optimize prompts to use Large Language Models (LLMs) efficiently. These prompts are designed to optimize the language model's performance, ensuring that it produces the desired output with minimal computational resources. For security analysts, generative AI offers a remarkable leap forward in the effectiveness of their work.
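The idea of prompt engineering for triage work can be sketched with a minimal, hypothetical template; the role description, output fields, and log format below are illustrative assumptions, not any vendor's API.

```python
# Hypothetical sketch: a structured prompt template a SOC analyst might use
# to get consistent, parseable triage output from an LLM. The role, output
# fields, and delimiters are invented for illustration.

def build_triage_prompt(alert_logs: str) -> str:
    """Assemble a triage prompt that constrains the model's role and output format."""
    return (
        "You are a SOC tier-1 triage assistant.\n"
        "Analyze the alert logs below and respond ONLY with these fields:\n"
        "severity: low|medium|high\n"
        "summary: one sentence\n"
        "recommended_action: one sentence\n\n"
        f"--- ALERT LOGS ---\n{alert_logs}\n--- END LOGS ---"
    )

prompt = build_triage_prompt("2024-09-18 12:01 failed ssh login x500 from 203.0.113.7")
print(prompt)
```

Constraining the output to named fields is what makes the response machine-parseable downstream, which is the practical point of prompt engineering in a SOAR pipeline.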

The integration of generative AI into Security Orchestration, Automation, and Response (SOAR) platforms has the potential to change the role of Security Operations Centre (SOC) analysts. This technology automates routine tasks, allowing analysts to spend more time on strategic aspects of their roles, such as planning new defensive strategies, identifying emerging threats, and formulating proactive mitigation plans.

Balancing Innovation and Responsibility

However, the potential use of generative AI goes beyond simply automating tasks or interacting with a chatbot. For instance, SOC analysts can now use generative AI to craft detailed playbooks that document the steps taken during an incident response. This documentation process not only automates responses but also builds a knowledge base that can inform future responses.

SOC analysts can also use generative AI to create alerts and perform tasks such as threat detection, incident analysis, event summarization, report generation, decision support, and playbook template suggestion. While the integration of generative AI into SOAR platforms offers substantial benefits, there are several challenges that need to be addressed.

Generative AI requires access to vast amounts of data to learn and make decisions. Ensuring that this data is handled securely and in compliance with privacy regulations is a significant challenge. In addition, there is a risk that AI models may develop biases based on the data they are trained on, which can lead to inaccurate or unfair outcomes.

Therefore, the use of generative AI must be accompanied by thorough quality control on the part of the vendor, to ensure that the information provided is indeed useful and accurate. This balanced approach reflects a careful consideration of both the opportunities and the complexities involved with integrating new technologies into security operations.

While some vendors are highly optimistic about the transformative potential of generative AI in SOAR solutions, others remain cautious, choosing to monitor the industry's development closely. These cautious vendors prioritize understanding how to align with customer expectations and carefully evaluate the practical advantages and potential challenges of implementing generative AI.

Great Expectations

By harnessing the potential of generative AI, however, SOC analysts can broaden their scope within cybersecurity practices, cultivating new knowledge and developing new skills. While Ludd's reaction was to destroy the machines he feared would replace human craftsmanship, the challenge now is not to resist technological advancement, but to integrate it. This approach reflects a broader trend in AI development, where the goal is not to replace human endeavor, but to augment it.

As a result, vendors should prioritize transparency in their marketing to demonstrate the practical value of generative AI, rather than relying on hype or jargon. This approach not only educates customers about the capabilities and limitations of generative AI but also helps in setting realistic expectations. For more on this, see my colleague John Tolbert's blog post on Some Direction for AI/ML-ess Marketing.

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to dissect how AI is used in cybersecurity.

See some of our other articles and videos on the use of AI in security:

Cybersecurity Resilience with Generative AI

Generative AI in Cybersecurity – It's a Matter of Trust

ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?

Asking Good Questions About AI Integration in Your Organization

Reflections & Predictions on the Future Use (and Mis-Use) of Generative AI in the Enterprise and Beyond


Verida

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 3)

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 3)

This is the third and final post in the series releasing the “Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI”, originally published by Chris Were, CEO and co-founder at Verida. You can catch up with Part 1 and Part 2.

Confidential Compute Nodes

Confidential Compute Nodes running on the Verida Self-Sovereign Compute Network operate a web server within a secure enclave environment to handle compute requests and responses.

There will be different types of nodes (e.g., LLM, User API) running different code depending on the service(s) they provide.

For maximum flexibility, advanced users and developers will be able to run compute nodes locally, on any type of hardware.

Nodes have key requirements they must adhere to:

GPU access is required for some compute nodes (e.g., LLM nodes), but not others. As such, the hardware requirements for each node will depend on the type of compute services running on the node.

Code Verifiability is critical to ensure trust in the compute and security of user data. Nodes must be able to attest the code they are running has not been tampered with.

Upgradability is essential to keep nodes current with the latest software versions, security fixes, and other patches. Coordination is required so applications can verify their code is running on the latest node versions.

API endpoints are the entry point for communicating with nodes. It’s essential that a web server operates within the secure enclave to communicate with the outside world.

SSL termination must occur within the secure enclave to ensure the host machine can’t access API requests and responses.

Resource constraints will exist on each node (e.g., CPU, memory) that limit the number of active requests it can handle. The network and nodes will need to coordinate this to ensure the selected node has sufficient resources available to meet any given request.
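The code-verifiability requirement above can be illustrated with a minimal hash-based measurement check. Real secure-enclave attestation involves hardware-signed quotes and a remote attestation protocol; this sketch, with invented function names, only shows the core idea of comparing a measurement of running code against an expected value.

```python
import hashlib

# Minimal sketch of code attestation: compare a measurement (hash) of the
# code a node is actually running against the expected published measurement.
# Real enclave attestation uses hardware-signed quotes; this comparison only
# illustrates the tamper-detection idea.

def measure(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

def attest(running_code: bytes, expected_measurement: str) -> bool:
    return measure(running_code) == expected_measurement

deployed = b"def handle(request): ..."
expected = measure(deployed)  # published by the service developer

print(attest(deployed, expected))                       # untampered code passes
print(attest(b"def handle(request): evil", expected))   # tampered code fails
```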

Interoperability and Extensibility

In order to create an efficient and highly interoperable ecosystem of self-sovereign APIs, it’s necessary to have a set of common data standards. Verida’s self-sovereign database storage network provides this necessary infrastructure via guaranteed data schemas within encrypted datasets, providing a solid foundation for data interoperability.

Developers can build new self-sovereign compute services that can be deployed on the network and then used by other services. This provides an extensible ecosystem of APIs that can all communicate with each other to deliver highly complex solutions for end users.

Figure 4: Interoperable data between self-sovereign AI services

Over time, we expect a marketplace of private AI products, services and APIs to evolve.

Service Discovery

Verida’s self-sovereign compute network will enable infrastructure operators to deploy and register a node of a particular service type. When an API needs to send a request to one of those service types, it can perform a “service lookup” on the Verida network to identify a suitable trusted, verifiable node it can use to send requests of the required service type.
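A service lookup of this kind can be sketched as a filter over a node registry; the registry structure, field names, and capacity model below are assumptions for illustration, not the network's actual protocol.

```python
# Illustrative sketch of a service lookup: pick a registered, verified node
# of the requested service type that still has spare request capacity.
# The registry shape and fields are invented for illustration.

REGISTRY = [
    {"node": "node-a", "service": "llm",      "verified": True, "active": 3, "max": 4},
    {"node": "node-b", "service": "llm",      "verified": True, "active": 4, "max": 4},
    {"node": "node-c", "service": "user-api", "verified": True, "active": 0, "max": 8},
]

def service_lookup(service_type: str):
    """Return the first verified node of the given type with spare capacity, else None."""
    for entry in REGISTRY:
        if (entry["service"] == service_type
                and entry["verified"]
                and entry["active"] < entry["max"]):
            return entry["node"]
    return None

print(service_lookup("llm"))       # node-b is at capacity, so node-a is chosen
print(service_lookup("user-api"))
```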

User Data Security Guarantees

It is essential to protect user privacy within the ecosystem and prevent user data leaking to non-confidential compute services outside the network. Each service deployed to the network will be running verifiable code, running on verifiable confidential compute infrastructure.

In addition, each service will only communicate with other self-sovereign compute services. Each API request to another self-sovereign compute service will be signed and verified to have been transmitted by another node within the self-sovereign network.
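The sign-and-verify flow for inter-service requests can be sketched with HMAC over a shared key as a simplified stand-in; the litepaper does not specify the scheme, and a production network would more likely use per-node asymmetric signatures.

```python
import hmac
import hashlib

# Sketch of signed node-to-node requests. HMAC with a shared network key is a
# simplified stand-in for whatever signature scheme the network actually
# adopts (per-node asymmetric keys are more likely in practice).

NETWORK_KEY = b"example-shared-network-key"  # illustrative only

def sign_request(payload: bytes) -> str:
    return hmac.new(NETWORK_KEY, payload, hashlib.sha256).hexdigest()

def verify_request(payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels when checking signatures
    return hmac.compare_digest(sign_request(payload), signature)

msg = b'{"service": "llm", "prompt": "..."}'
sig = sign_request(msg)
print(verify_request(msg, sig))                    # genuine request accepted
print(verify_request(b'{"tampered": true}', sig))  # altered payload rejected
```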

Tokenized Payment

The VDA token will be used for payment to access self-sovereign compute services. A more detailed economic model will be provided; however, the following key principles are expected to apply.

End users will pay on a “per-request” basis to send confidential queries to compute nodes and the services they operate. The cost per request will be calculated in a standardized fashion that balances the computation power of a node, memory usage and request time. Applications can sponsor the request fees on behalf of the user and then charge a subscription fee to cover the cost, plus profit, much like a traditional SaaS model.

Node operators will be compensated for providing the confidential compute infrastructure to Verida’s Self-Sovereign Compute Network.

Builders of services (e.g., AI Prompts and Agents) will be able to set an additional fee for using their compute services, above and beyond the underlying “per-request” compute cost. This open marketplace for AI Agents and other tools drives innovation and provides a seamless way for developers to generate revenue from the use of their intellectual property.

Verida Network will charge a small protocol fee (similar to a blockchain gas fee) on compute fees.
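The pricing principles above can be sketched as a simple cost function; the rates, protocol-fee level, and function shape below are invented for illustration and are not part of the litepaper's economic model.

```python
# Hypothetical per-request pricing sketch. The litepaper says cost balances a
# node's compute power, memory usage, and request time; the weights and the
# protocol-fee rate here are invented for illustration.

CPU_RATE = 0.002      # tokens per CPU-second (assumed)
MEM_RATE = 0.0005     # tokens per GB-second (assumed)
PROTOCOL_FEE = 0.01   # 1% network fee, akin to a gas fee (assumed)

def request_cost(cpu_seconds: float, mem_gb: float, duration_s: float,
                 builder_fee: float = 0.0) -> float:
    """Total tokens charged for one request, including builder and protocol fees."""
    compute = cpu_seconds * CPU_RATE + mem_gb * duration_s * MEM_RATE
    subtotal = compute + builder_fee
    return round(subtotal * (1 + PROTOCOL_FEE), 6)

# An application sponsoring this fee could recover it via a SaaS subscription.
print(request_cost(cpu_seconds=10, mem_gb=4, duration_s=30, builder_fee=0.05))
```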

Other Use Cases

Data Training Marketplaces

Verida’s Private Data Bridge allows users to reclaim their private data from platforms such as Meta, Google, X, email, LinkedIn, Strava, and much more.

Users on the Verida network could push their personal data into a confidential compute service that anonymizes their data (or generates synthetic data) which is made available to various AI data marketplaces. This provides an option for users to monetize their data, without risk of data leakage, while unlocking highly valuable and unique datasets such as private messages, financial records, emails, and healthcare data for training purposes.

Managed Crypto Wallets

There is a vast array of managed wallet services available today that offer different trade-offs between user experience and security.

Having an always-available cloud service that can protect users’ private keys, while still providing multiple authorization methods, is extremely useful for onboarding new users and providing additional backup protection measures for existing users.

Such a managed wallet service becomes rather trivial to build and deploy on the Verida self-sovereign compute network.

Verifiable Credentials

Verida has extensive experience working with decentralized identity and verifiable credential technology, in combination with many ecosystem partners.

There is a significant pain point in the industry, whereby developers within credential ecosystems are required to integrate many disparate developer SDKs to offer an end-to-end solution. This is due to the self-sovereign nature of credentials and identity solutions, where a private key must be retained on end-user devices to facilitate end-to-end security.

Verida’s self-sovereign compute network can provide a viable alternative, whereby application developers can replace complex SDK integrations with simple self-sovereign APIs. This makes integration into mobile applications (such as identity wallets) and traditional web applications much easier, simpler, and more viable.

This could be used to provide simple API integrations to enable:

Identity wallets to obtain access to a user’s verifiable credentials

End users to pre-commit selective disclosure rules for third-party applications or identity wallets, without disclosing their actual credentials

Trusted, verifiable universal resolvers

Trust registry APIs

Any complex SDK that requires a user’s private key to operate, could be deployed as a micro service on Verida’s self-sovereign compute network to provide a simpler integration and better user experience.

Conclusion

Verida’s mission to empower individuals with control over their data continues to drive our innovations as we advance our infrastructure. This Litepaper outlines how the Verida Network is evolving from decentralized, privacy-preserving databases to include decentralized, privacy-preserving compute capabilities, addressing critical issues in AI data management and introducing valuable new use cases for user-controlled data.

As AI faces mounting challenges with data quality, privacy, and transparency, Verida is at the forefront of addressing these issues. By expanding our network to support privacy-preserving compute, we enable the more effective safeguarding of private data while allowing it to be securely shared with leading AI models. This approach ensures end-to-end privacy and opens the door to hyper-personalized and secure AI experiences.

Our solution addresses three fundamental problems: enabling user access to their private data, providing secure storage and sharing, and ensuring confidential computation. Verida’s “Private Data Bridge” allows users to securely reclaim and manage their data from various platforms and facilitate its use in personalized AI applications without compromising privacy.

While we are not focusing on decentralized AI model training or distributed inference, Verida is committed to offering high-performance, secure, and trusted infrastructure for managing private data. We are collaborating with partners developing private AI agents, AI data marketplaces, and other privacy-centric AI solutions, paving the way for a more secure and private future in AI. This empowers users to be confident about the ways their data is used, and receive compensation when they do choose to share elements of their personal data.

As we continue to build on these advancements, Verida remains dedicated to transforming how private data is utilized and protected in the evolving landscape of AI.

You can learn more or get involved at https://www.verida.network/

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 3) was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Passwordless Authentication for Enterprises

by Alejandro Leal

This report provides a detailed examination of passwordless authentication technologies designed for enterprise use cases. As organizations increasingly prioritize robust and streamlined security protocols, the demand for sophisticated passwordless solutions has grown significantly. This report explores the current landscape of enterprise-focused passwordless authentication technologies and guides businesses in selecting the most effective solution to meet their security needs. By analyzing the market segment, vendor product and service functionality, relative market share, and innovative approaches, organizations can make informed decisions about their authentication strategies for their employees and systems.

Finema

This Month in Digital Identity — September Edition

This Month in Digital Identity — September Edition

Welcome to the September edition of our monthly digital identity series! This month, we’re exploring the critical developments and innovative strategies that are redefining the landscape of digital identity. Here’s a closer look at the essential topics we’ll be covering:

AI Enhancing Healthcare Fraud Prevention

Artificial Intelligence (AI) is becoming a crucial tool in combating healthcare fraud by analyzing vast datasets in real-time to detect fraudulent activities, particularly through voice biometrics that verify patient identities and prevent unauthorized access to healthcare services. Additionally, there is a growing focus on enhancing patient experiences through digital trust technologies, such as secure digital signatures and messaging platforms, which protect patient data and streamline healthcare processes. Innovations like chip-based ID cards are also being adopted, as seen in Vietnam, to secure patient information and simplify access to healthcare services, reducing the risk of identity theft and fraud. These technological advancements collectively aim to strengthen the integrity of healthcare systems, safeguard patient data, and improve operational efficiency, ultimately enhancing the overall patient experience.

Somalia’s Financial Inclusion Drive

Somalia is advancing its digital transformation with a new Memorandum of Understanding (MoU) between the National Identification and Registration Authority (NIRA) and the Somali Banks Association (SBA) to drive financial inclusion through the national ID program. Launched a year ago, this program aims to provide the 18 million residents with a unified identity, facilitating access to banking services and aligning with global standards. The partnership seeks to enhance financial security, reduce fraud, and streamline banking processes by using the National Identification Number (NIN) for customer verification. This initiative is part of a broader effort to bolster the country’s economy, ensure compliance with international regulations, and increase public trust in financial institutions. The collaboration has been praised by key government figures and international partners, who see it as crucial for Somalia’s development. Ongoing consultations with stakeholders aim to further strengthen the national ID system, making it more impactful in supporting economic growth and modernizing financial services.

Spain’s New Age Verification System

Spain has introduced technical specifications for a new online age verification system aimed at controlling minors’ access to adult content, using W3C Verifiable Credentials (VCs) as the core technology. This approach addresses growing concerns over the negative impact of unrestricted access to adult content on the mental health and social skills of children and teenagers. By implementing W3C VCs, Spain ensures that age verification is conducted securely and privately, without disclosing personal information, thus aligning with GDPR principles. W3C VCs offer unmatched security through advanced cryptographic methods, enhanced privacy by allowing users to share only necessary information, and portability by integrating seamlessly with digital wallets. The system also follows the OpenID For Verifiable Presentations (OpenID4VP) specification, ensuring secure and private verification, and includes a trust management framework to ensure only authorized entities can issue or verify credentials, making it an ideal solution for protecting minors online.

The Digital Travel Credential (DTC)

In the realm of digital identity, numerous digital credentials are vying to replace physical documents, with the European Union’s eIDAS 2.0 and digital driver’s licenses being notable examples. However, none match the Digital Travel Credential (DTC) standard for digital trust, developed by the International Civil Aviation Organization (ICAO), which sets the universal standards for passports. The DTC, designed as the digital equivalent of a passport, offers two types: one created by a user from their physical passport and another issued directly by passport authorities. Indicio and SITA pioneered the implementation of the Type 1 DTC, which is now being adopted by countries and airlines for seamless travel. The DTC’s strength lies in its use of cryptographic verification, ensuring that passport data is securely held on a user’s device without needing to be stored in centralized databases, mitigating risks of data breaches. By scanning their passport, users can verify the authenticity of their data, bind it to their device through biometric checks, and ensure that their digital credentials are trustworthy and tamper-proof. This system provides airlines, airports, and border control with the confidence to streamline travel processes, knowing that the data in the DTC is authenticated, portable, and instantly verifiable.
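The hash-check step behind that cryptographic verification can be sketched as follows: the passport chip's security object holds a digest of each data group, and a verifier recomputes and compares them. Real ICAO-style verification also checks the issuing state's signature over the security object, which this sketch omits; the data-group contents are placeholders.

```python
import hashlib

# Sketch of the hash-check step in passport-style verification: the security
# object holds a digest of each data group, and a verifier recomputes them to
# detect tampering. The signature over the security object itself is omitted.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

data_groups = {
    "DG1_mrz":  b"P<UTOEXAMPLE<<JANE<<<<<",   # placeholder machine-readable zone
    "DG2_face": b"<jpeg bytes>",              # placeholder facial image
}
security_object = {name: digest(blob) for name, blob in data_groups.items()}

def verify_groups(groups: dict, sod: dict) -> bool:
    """Check every presented data group against the security object's digests."""
    return all(digest(blob) == sod.get(name) for name, blob in groups.items())

print(verify_groups(data_groups, security_object))            # authentic data passes
tampered = dict(data_groups, DG1_mrz=b"P<UTOFORGED<<")
print(verify_groups(tampered, security_object))               # altered data fails
```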

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Stay tuned for future editions of our monthly segment!

This Month in Digital Identity — September Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID

POSTECH Adopts Metadium Mainnet-Based Smart Student ID

Dear Community,

We have some exciting news to share. Pohang University of Science and Technology (POSTECH) has adopted a blockchain-based smart student ID using Metadium’s mainnet. This significant achievement demonstrates the excellence and reliability of Metadium’s technology.

Here are the unique features that make POSTECH’s smart student ID stand out:

Security and Privacy: Students’ personal information is securely protected through the Metadium mainnet, making it impossible to falsify or tamper with user information.

Convenient Use: Using blockchain-based DID authentication, users can manage their personal information and selectively submit information. Additionally, students can easily issue and use mobile student IDs remotely through their smartphones.

Efficient Management: The university can now issue mobile smart student IDs through an online automated process, in addition to plastic student IDs, enabling more efficient workflow improvements.

This case at POSTECH is an excellent example of how blockchain technology can be applied to make our lives more convenient. Our Metadium team will continue to strive for more universities and institutions to use Metadium’s technology.

We are truly grateful for the unwavering interest and support from the Metadium community. We eagerly look forward to your continued support.

Thank you.

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 01. September 2024

KuppingerCole

Generative AI in SOAR: Balancing Innovation and Responsibility

Generative AI is ubiquitous - anyone can use ChatGPT and other tools for free to create text, images, and more. But generative AI also has potential in the professional environment. Businesses should consider how they can leverage the use of AI with prompt engineering etc.

In this episode, Alejandro and Matthias discuss the integration of machine learning and AI into cybersecurity infrastructures, particularly SOARs. The conversation covers the role of generative AI in changing the daily tasks of cybersecurity professionals, the challenges of integrating generative AI into SOAR platforms, the importance of prompt engineering, and the need for a balanced approach to innovation and accountability. It also addresses the security and ethical considerations of using AI in cybersecurity and the general impact of generative AI on different industries.



Friday, 30. August 2024

auth0

Deploy Secure Spring Boot Microservices on Azure AKS Using Terraform and Kubernetes

Deploy a cloud-native Java Spring Boot microservice stack secured with Auth0 on Azure AKS using Terraform and Kubernetes.

Okta Fine Grained Authorization is now Available in Private Cloud on AWS

Now, you can deploy Okta FGA in several AWS regions with high availability and high requests-per-second capacity.

Thursday, 29. August 2024

Spruce Systems

Why the U.S. Post Office is Key to Fighting AI Fraud

Pending legislation could transform the venerable USPS into a key player in the fight against fraud.

For years now, the United States Postal Service has been struggling to adjust to the digital world, as the decline of letter mail has left the agency’s budget in shambles. That’s a threat to the Postal Service’s role in connecting all Americans.

Fortunately, a bill under consideration in the U.S. Senate, the POST ID Act, would reinvigorate the venerable service for a new era, help ease USPS’s budget woes, and make it a powerful asset for digital security. The bill proposes using physical Post Office locations to offer real-world identity verification that would, in turn, help fight fraud and disinformation online.

That’s similar to the way DMV locations in states like California issue both traditional and digital driver’s licenses. But the Post Office could play a much broader role: the bill’s bipartisan sponsors, Bill Cassidy (R-LA) and Ron Wyden (D-OR), want to allow the Post Office to perform identity verifications for an array of private clients, in addition to the public sector agencies it already serves. Combined with some product strategy, this new paid service could help to balance the agency’s budget as well.

This new USPS service would be an extension of the agency’s longtime work connecting people against all obstacles. Instead of refusing to stop for “snow nor rain nor heat nor gloom of night,” this new Postal Service would also be tasked with helping overcome hackers.

A Physical Network for the Digital Age

Senator Wyden was absolutely spot-on when he said that “AI deepfakes have added a whole new challenge for the most common [online identity] verification methods. The best way to confirm who someone is, is in-person verification.”

Wyden’s warning came in October of last year, and the threat of AI has only become more obvious since then. That includes a recent report that artificial intelligence was being used to create convincing fake ID cards at an unprecedented scale, and the equally concerning evolution of deepfake tools into the realm of video, allowing convincing live impersonation online.

But those tricks don’t work in the physical world. Only a real, natural human can walk up to the counter at a Post Office and seek identity verification by a fellow human. Not just physical appearance, but also biometrics like fingerprints are much harder to fake in person than online.

There are very few entities of any sort better positioned to conduct that affirmation than the U.S. Post Office. The USPS has a staggering 31,123 locations across practically every corner of America, even without including locations operated under contract. Post Offices can be found in far-flung U.S. territories like Guam, or at the far northern edge of Alaska, guaranteeing new verification services can be accessed by very nearly every American.

Once an identity is verified in person, it can be digitally recorded using new digital identity credential technology that is extremely trustworthy and secure—and even lets users verify their humanness without revealing their identity.

The Power of Cryptography

The Cassidy-Wyden bill would give the USPS new responsibilities for verifying natural humans, and the ability to serve an array of clients would create a new stream of revenue for the agency. Those verifications would then need to be represented as a trustworthy “digital credential” for users to present online. Luckily, such systems already exist, for instance, in the form of the digital driver’s license offered in California and a growing list of other states.

Trustworthy digital credentials rely on a mix of innovative encryption and widely available hardware – specifically, your mobile phone. In broad outline, a credential issuer like the DMV or Post Office would have a unique digital ‘signature’ tied to a secure computer on-site. After conducting identity verification, the USPS office would digitally sign a credential using the “secure element” chip in the recipient’s mobile phone. This credential could then be presented in a variety of contexts to help a user prove their identity.
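In broad strokes, that signing flow can be sketched in a few lines. This is a minimal illustration assuming ECDSA over P-256 and the Python `cryptography` package; the issuer name, claim layout, and key handling are invented for the example and are not the actual USPS or mDL format:

```python
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Illustrative only: a real issuer would keep this key in secure hardware,
# and the holder's key would live in the phone's secure element chip.
issuer_key = ec.generate_private_key(ec.SECP256R1())

credential = json.dumps({
    "issuer": "usps.example",                     # hypothetical issuer ID
    "subject_key": "holder-public-key-fingerprint",
    "claim": {"verified_in_person": True},
}, sort_keys=True).encode()

# The issuer signs the credential; the signature binds the claims to the issuer.
signature = issuer_key.sign(credential, ec.ECDSA(hashes.SHA256()))

# Any verifier holding the issuer's public key can check the credential later.
issuer_key.public_key().verify(signature, credential, ec.ECDSA(hashes.SHA256()))
print("credential signature verified")
```

The essential property is that verification needs only the issuer's public key, so the credential can be presented to any number of relying parties without contacting the issuer again.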

The details of the “identity” that a user wants to prove can vary widely, and digital credentials of this sort are very flexible. A common feature of digital credentials is what’s known as “selective disclosure,” which lets a credential holder share only the minimum required information in a particular interaction. 
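Selective disclosure can be built from surprisingly simple primitives. One common construction, used for example in the IETF's SD-JWT drafts, replaces each claim with a salted hash in the signed credential, so the holder can later reveal individual claims while the rest stay hidden. A minimal sketch, with invented claim names and values:

```python
import hashlib
import json
import secrets

def commit(claim_name, value, salt):
    """Salted digest of one claim; the salt stops brute-force guessing of values."""
    data = json.dumps([salt, claim_name, value]).encode()
    return hashlib.sha256(data).hexdigest()

# Issuer: commit to every claim; only the digests go into the signed credential.
claims = {"name": "A. Smith", "over_18": True, "city": "Omaha"}
salts = {k: secrets.token_hex(16) for k in claims}
signed_digests = sorted(commit(k, v, salts[k]) for k, v in claims.items())

# Holder: disclose a single claim (name, value, salt). The verifier recomputes
# its digest and checks membership in the signed list; "name" and "city" are
# never revealed.
disclosed = ("over_18", claims["over_18"], salts["over_18"])
assert commit(*disclosed) in signed_digests
print("claim verified without revealing the others")
```

Sorting the digests hides which position each claim occupied, a small but standard touch to avoid leaking information through ordering.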

At its most minimal, a digital credential issued by the USPS could prove only that the holder is a real human being without disclosing any other identifying data. As laid out in a recent research paper by a coalition including researchers from SpruceID, this simple “personhood credential” could be a key element in the fight against costly identity fraud and toxic disinformation online.

Expanding the Network of Trust

The incredible omnipresence of USPS locations makes it an ideal candidate, alongside DMVs, to lead the charge for in-person identity verification and issuance. We can still think bigger, though.

Other trusted entities might be brought into the in-person verification network, expanding access and convenience even further. Candidates might include other shippers, such as UPS and FedEx, which have extensive physical networks as well as address records and other data that can help confirm identities. In the most rural or remote parts of America, retailers might be recruited to the network, though they would require significant additional equipment and training. One benefit of allowing certified private sector participants to also provide in-person identity verification is keeping costs low for users and businesses, while incentivizing competition and innovation.

Over time, the identity verification process would also be streamlined for efficiency and convenience. One major potential efficiency would be collecting an applicant’s data online before an in-person verification session, reducing wait times and workloads. Streamlining of this sort would be particularly important since some digitally signed credentials need to be refreshed more often than conventional physical identity documents.

Offering identity verification via Post Office locations would be part of a yet more expansive system of verifications built on a shared standard for data formats, security practices, and privacy measures. The larger system that SpruceID is helping drive forward is flexible, offering various options for credential holders to choose what data they share.

But perhaps the most important yet challenging feature of this emerging system is creating broad access to in-person verification. For that, the good old Post Office will be hard to beat.

To learn more about SpruceID and our approach to fighting AI fraud, visit our website.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


liminal (was OWI)

Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™

With information overload becoming a constant challenge, quickly accessing relevant and actionable insights is essential to making informed decisions and staying competitive. The Link Market Monitor, powered by expert-in-the-loop AI technology, combines real-time data with expert analysis to cut through the noise and surface what’s important to you—and what you should do about it. By […]
With information overload becoming a constant challenge, quickly accessing relevant and actionable insights is essential to making informed decisions and staying competitive. The Link Market Monitor, powered by expert-in-the-loop AI technology, combines real-time data with expert analysis to cut through the noise and surface what’s important to you—and what you should do about it. By delivering only the most pertinent market signals, it allows you to efficiently spot trends and seize new opportunities. This guide will show you how to use the Market Monitor to tailor insights to your needs, ensuring you’re always a step ahead.

Step 1: Accessing the Market Monitor™

From the Dashboard: Navigate to your Link’s dashboard. Look for the Market Monitor widget, which displays recent headlines from your top monitors. Click on the widget to be taken directly to the Monitors Page.

Using the Left Navigation Menu: In the platform’s main interface, locate the “Market Monitor” link in the left-hand navigation menu. Click on it to access the Monitors Page.

Step 2: Setting Up Your Tailored Monitors

On the Monitors Page, you’ll find a list of pre-configured monitors that align with your industry interests, such as “Emerging Technologies,” “Competitive Landscape,” or “Market Trends.” Click the “create new monitor” button to create a new monitor that meets your specific needs. Here, you can specify companies, sectors, themes, keywords, and more to tailor your monitor’s focus.

Step 3: Exploring and Curating Insights

Opening a Monitor: Click “Open Monitor” on any monitor card you’ve created. You’ll be directed to the Monitor Detail Page, where a curated newsfeed offers real-time insights filtered by your set criteria.

Interacting with Curated Content: Scroll through the newsfeed to browse relevant articles and updates. Click on any article to open it in the reading pane, where you can explore the details. Use the filter bar at the top of the page to further refine the content within your monitor, ensuring you see only the most relevant insights.

Step 4: Leveraging Expert-in-the-Loop AI for Personalized Insights

The Link Market Monitor utilizes expert-in-the-loop AI technology, which combines real-time data with expert analysis to deliver personalized insights. As you interact with the monitors, the AI engine continuously learns from your preferences, fine-tuning the content it delivers to ensure it remains highly relevant to your needs.

Step 5: Receiving Real-Time Alerts and Updates

Set up real-time alerts to stay informed without the noise. The Market Monitor’s AI engine filters out irrelevant information, sending you only the most pertinent updates. Customize your alerts to focus on key trends, opportunities, and competitive threats, ensuring you never miss a critical development in your industry.

Step 6: Sharing Insights with Your Team

Collaborating on Strategies: Use the shared monitors to collaborate effectively, ensuring your team is aligned with the latest market intelligence and ready to make informed decisions.

Best Practices:

Regularly Update Your Monitors: As your business goals evolve, update your monitors to reflect new priorities and market conditions.

Maximize AI Insights: Leverage the expert-in-the-loop AI to refine and improve the relevance of your insights continuously.

Focus on What Matters: Use the real-time signals to stay on top of key developments, allowing you to react swiftly to market changes.

Why the Market Monitor™ is Essential for Business Leaders

Proactive Decision-Making: The Market Monitor™ equips you with the most relevant insights, empowering you to stay ahead of market trends and shifts. By providing timely, actionable information, it allows you to anticipate changes and make decisions that drive your organization forward.

Enhanced Strategic Focus: As a business leader, focusing on what truly matters is crucial. The Market Monitor™ filters out irrelevant data and surfaces only the most pertinent signals, ensuring your strategic decisions are based on insights that directly impact your business objectives.

Continuous Adaptation: The expert-in-the-loop AI technology behind the Market Monitor™ ensures that the insights you receive are always aligned with current market conditions. As your business environment evolves, the Market Monitor™ adapts to provide you with up-to-date, relevant information, helping you stay agile in a competitive landscape.

Collaborative Insight Sharing: Effective leadership involves ensuring your entire team is aligned with the latest intelligence. The Market Monitor™ facilitates seamless collaboration by allowing you to share tailored insights across your organization, enabling informed, unified decision-making.

Strategic Empowerment: In a complex and fast-paced industry, having the right information at the right time is crucial. The Market Monitor™ empowers you with the knowledge and tools needed to navigate market complexities confidently, helping you lead your organization to sustained success.

The post Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™ appeared first on Liminal.co.


Spherical Cow Consulting

Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities

Privacy-Enhancing Technologies (PETs) are essential for safeguarding digital identities amidst increasing data breaches. They encompass tools like zero-knowledge proofs and advanced biometrics to secure both human and non-human identities in the digital space. As digital identity expands to include non-human entities, PETs are vital for ensuring privacy and security.

I want to talk about PETs. No, not about my cats (though they are awesome), but about Privacy-Enhancing Technologies.

Not a day goes by without learning about another data breach that is exposing critical details about people and things online. Enter Privacy-Enhancing Technologies (PETs)—a critical component in digital security. These tools, like zero-knowledge proofs and advanced biometrics, are designed to safeguard digital identities while allowing people and things to get work done.

The rise of privacy-enhancing technologies (PETs) like zero-knowledge proofs and advanced biometrics is reshaping how we think about and manage digital identity. But what’s driving this change, and why should it matter to you, whether you’re managing user access or overseeing countless processes and APIs in the cloud?

All Identities Need PETs

Digital identity isn’t just about people anymore. Sure, your personal online identity—how you log in, interact, and transact—remains essential. But increasingly, digital identity also includes non-human entities like software processes, APIs, and entire cloud workloads. These non-human identities need the same attention to security and privacy as human ones, especially as they become more central to how businesses operate.

When I first started thinking about digital identity, it was all about ensuring the right people had access to the right resources. Today, though, we’re dealing with identities that aren’t people at all—identities that exist in the cloud, managing everything from payroll to AI model training, often without any direct human oversight or even a human-like credential. And these identities need to be just as secure, if not more so, given the scale and complexity they operate within.

Human and Non-Human Considerations

Biometrics like facial recognition and fingerprint scanning have long been used to verify human identities. There’s a lot of work in the field of biometrics, especially with concerns about deepfakes making Ye Olde Fashioned liveness detection hardly a thing. But what about non-human identities? While biometrics might not apply directly, the principles of unique identification and secure access certainly do. For instance, in a cloud environment, processes and APIs need to be uniquely identified and authorized—much like a person—but with a focus on speed, scalability, and automation.

So, two challenges: ensuring that human identities are securely managed while also creating systems that can handle the massive scale of non-human identities. Whether it’s a government-issued digital credential or a cloud-based process, the goal is the same: secure, reliable, and privacy-respecting identity management.
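For the non-human side, one concrete naming convention from the cloud-native world is the SPIFFE ID: a URI of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt; that uniquely identifies a workload rather than a person. As a sketch only (real SPIFFE validation is stricter than this shape check):

```python
from urllib.parse import urlparse

def is_valid_spiffe_id(uri: str) -> bool:
    """Loose shape check for a SPIFFE-style workload ID."""
    parts = urlparse(uri)
    return (parts.scheme == "spiffe"          # fixed scheme
            and bool(parts.netloc)            # trust domain present
            and parts.path not in ("", "/")   # workload path present
            and parts.port is None)           # SPIFFE IDs carry no port

print(is_valid_spiffe_id("spiffe://example.org/payroll/batch-runner"))  # True
print(is_valid_spiffe_id("https://example.org/payroll"))                # False
```

The trust domain and path here are invented; the point is simply that a process or API gets a stable, verifiable name it can present, analogous to a person presenting a credential.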

Addressing Privacy Concerns with Digital Credentials

Governments are moving towards digital credentials to improve security and convenience. But this shift brings new privacy challenges. For humans, the way these credentials are issued and managed has significant implications for personal privacy. PETs like zero-knowledge proofs are becoming crucial to ensure that sensitive information remains private, even when it’s used to prove identity.

For non-human identities, the concerns are different but equally important. In cloud environments, digital credentials need to be robust enough to manage the complex interactions between countless processes and APIs, all while maintaining strict access controls and minimizing the risk of breaches.

Of course, if it was easy, I wouldn’t be writing about it. Standards organizations like the IETF are trying to define what a credential should look like in a scenario where it may or may not be for a person (that’s work in SPICE). They’re also trying to define the best way to move those credentials around from one cloud service to the next, given those cloud services don’t exactly speak the same languages (that’s work in WIMSE). And these days we can’t have those conversations without considering the privacy implications of all of it.

Zero-Knowledge Proofs: PETs for All Identities

Which takes us to an area I find fascinating: Zero-Knowledge Proofs (ZKPs). ZKPs are a game-changer for both human and non-human identities. They allow for the verification of information without revealing the underlying data, making them perfect for situations where privacy is paramount. To put it another way, a ZKP will tell you that the proof is true without actually exposing any of the data that is included in the proof.  “Is this mobile driver’s license valid” becomes a question that can be answered without exposing any of the data in the mDL. It’s magic, I tell you, pure magic. (And math. Lots and lots of math.)
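The flavor of a ZKP can be shown with the classic Schnorr protocol: a proof that the prover knows a secret exponent x with y = g^x (mod p), without ever sending x. This sketch uses deliberately tiny numbers; production systems use ~256-bit groups and non-interactive variants (Fiat-Shamir):

```python
import secrets

# Toy group: g = 4 generates a subgroup of prime order q = 11 modulo p = 23.
p, q, g = 23, 11, 4
x = 7                        # the prover's secret
y = pow(g, x, p)             # the public value everyone can see

# Round 1: prover commits to a random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Round 2: verifier sends a random challenge.
c = secrets.randbelow(q)

# Round 3: prover responds; s leaks nothing about x without r.
s = (r + c * x) % q

# Verifier checks g^s == t * y^c (mod p); x itself was never transmitted.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c, yet an eavesdropper who sees only t, c, and s learns nothing usable about x.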

In the human world, this might mean you will be able to prove your identity without exposing personal details. In the non-human world, ZKPs can help secure interactions between cloud processes, ensuring that only authorized entities can access sensitive data or perform critical operations. This approach not only protects individual privacy but also bolsters the security of complex digital ecosystems.

Why aren’t ZKPs widely deployed? Because the math involved is incredible, and not all devices can actually handle the necessary computations in the time people expect their web pages to load or their APIs to run. But that’s today; tomorrow is going to be an entirely different story as hardware improves.

Visiting the PETs Shop

Technology is at the heart of these advances. From cryptography to AI, new tools are making it possible to protect digital identities against a range of threats. But with great power comes great responsibility. Whether it’s human users at risk from phishing attacks or non-human processes vulnerable to security breaches, there will never be a point where security and privacy are guaranteed. Innovation will always be necessary to get ahead of bad actors.

For human identities, this might mean adopting stronger authentication methods. For non-human identities, it could involve developing more sophisticated ways to manage and secure API interactions across multiple cloud environments. The challenge is ensuring that these technologies are both effective and adaptable, capable of protecting identities at scale.

PETs Need to be Everywhere

As digital identity continues to evolve, the line between human and non-human identities will blur further. In commerce, for example, digital identities—whether of customers or the processes serving them—are becoming central to every transaction. The transactions may trigger any number of APIs and services that go far beyond a single person’s digital identity. And since all problems have not been solved, businesses are going to have to support the innovation necessary to keep their data safe.

Wrap Up – Loving Your PETs

The future of digital identity is definitely not boring! PETs play a crucial role in shaping how we protect digital identities and are definitely worthy of some focused attention. It’s not the only piece of the puzzle in keeping our data safe, but it’s a biggy.

For tech leaders, I’m afraid you have another area of technology you need to keep on your radar. Your organization must engage in shaping privacy-enhancing digital identity solutions. Don’t just install them, think about how they meet tomorrow’s requirements. Better yet, be a part of defining tomorrow’s requirements in the standards being developed today.

For individual contributors like me, it’s crucial to stay informed. Keep up with the latest security practices, and be on the lookout for open calls for comments on the standards that impact this space. Your voice matters in shaping the standards and regulations in this space.

And if keeping track of all this sounds overwhelming, why not let someone else do the heavy lifting? Reach out to me; let’s chat about how I can help by providing regular updates and insights, tailored to your needs. You don’t have to do this alone.

The post Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities appeared first on Spherical Cow Consulting.


IDnow

AML compliance in 2024: Assessing the effectiveness of AMLD6 and EU’s new AML package.

We explore the EU’s new AML package of rules and consider how it will affect the future of compliance in Europe.  Ever since the first directive to combat money laundering and the financing of terrorism was issued in 1991, the European Union has continued to improve and harmonize the legislative arsenal of its member states.  […]
We explore the EU’s new AML package of rules and consider how it will affect the future of compliance in Europe. 

Ever since the first directive to combat money laundering and the financing of terrorism was issued in 1991, the European Union has continued to improve and harmonize the legislative arsenal of its member states. 

In the space of 30 years, six dedicated Anti-Money Laundering Directives (AMLD) have been issued. The first was mainly aimed at combating drug-related offences and introduced the first KYC provisions. The 4th and 5th Directives (AMLD4 & AMLD5) brought in increased transparency obligations, including access to beneficial ownership registers and strengthening controls on virtual currency transactions. With each new iteration, the scope of protection has expanded significantly and now covers many areas, ranging from art dealing to cryptocurrency trading.  

A major development to AML controls came in May 2024 with the release of the AML package, a set of legislative proposals aimed at strengthening the EU’s AML/CFT rules. The AML package aims to close regulatory gaps, strengthen cooperation between member states and ensure uniform application of the rules across the EU.

The AML package is well on its way to becoming a comprehensive model for the banking industry. It offers uniform, efficient application of AML requirements, and the combined rule sets cover everything from top-level economic decision-making to the daily life of individuals. However, such legislation and regulation often carries a somewhat negative reputation, as its final form can stifle innovation rather than protect the people it claims to serve.

Analysts and pundits commend the EU for its outreach to seek input and collaboration for new legislation, but final forms of initiatives rarely resemble the spirit in which they began. This is exemplified in the Draghi Report of September 2024 that discusses European competitiveness.

As the AML package is being finalized, there is still the opportunity for strong private sector collaboration. If done right, this brings Europe close to ‘digital first’ solutions that are standardized, scalable and competitive on a global scale.

Rayissa Armata, Director of Global Regulatory and Government Affairs at IDnow.

“This would better ensure a more level playing field for both traditional services alongside rapidly growing industries such as crypto, blockchain, and digital identity verification processes based on more secure frameworks. If such points are harmonized and implemented properly, Europe has a strong chance to be a leader in the next phase of development in the digital economy,” adds Rayissa.

Here, we explore some of the new rules and consider the effect it may have on AMLD6 and the future of compliance in Europe. 

5 new changes to AML rules and regulations in 2024.

1. A new European Anti-Money Laundering Authority (AMLA) has been established and will be operational in Frankfurt from 2025. With a staff of 400, it will centralize anti-money laundering efforts, coordinate national authorities and conduct cross-border investigations.

2. A directive which will further tighten criminal provisions and procedures that need to be adopted by member states to improve the AML/CFT regime.

3. A regulation that will introduce harmonised rules, directly applicable across all EU member states, to combat money laundering and terrorist financing.

4. Crypto-asset service providers (CASPs) will now be required to collect and store information on the source and beneficiary of the funds for each transaction. This rule, known as the “travel rule”, already exists in traditional finance and requires that information on the source of the asset and its beneficiary travels with the transaction and is stored on both sides of the transfer. CASPs will be obliged to provide this information to competent authorities if an investigation is conducted into money laundering and terrorist financing. This means that businesses operating in these spaces must adopt harmonized verification standards, aligning with those used by traditional financial institutions.

5. A directive on access to centralized bank account registers: this makes information from centralized bank registers – data relating to the identity and location of bank account holders – available to member states through a single access point.

Regulations, directives and AMLD6 changes.

It’s important to note that there is an Anti-Money Laundering Regulation (AMLR) and Anti-Money Laundering Directives.

AMLR focuses more on regulatory and supervisory mechanisms, while directives such as AMLD6 enhance the criminal law framework for tackling money laundering. Together, these laws are designed to increase financial transparency, make it harder to use the financial system for illicit purposes, and ensure greater accountability for both individuals and legal entities involved in money laundering.

The AMLR provides a uniform set of standards directly applicable across the EU, ensuring consistency in financial and compliance procedures. AMLD6, however, allows member states some flexibility in how they apply criminal sanctions and enforcement measures, provided they align with the directive’s goals. Together, AMLR and AMLD6 form a cohesive framework within the AML Package.

AMLD6, which came into force in December 2020, has introduced several new legal provisions and expanded the list of criminal offences related to money laundering. Faced with the diversification of money laundering schemes, it now includes offences that go beyond simple financial crime. There are now 22 additional offences, including environmental crimes, tax crimes and cybercrime.  

AMLD6 also encourages member states to prosecute “facilitators” who help to carry out illegal activities. How member states should prosecute is also being revised and AMLD6 seeks to improve the deterrent effect of existing legislation by imposing tougher penalties. EU member states are now required to impose prison sentences of at least four years for serious money laundering offences, with heavier penalties for repeat offenders. Significant financial penalties are also issued (up to €5 million for individuals), to deprive the culprits of any profit derived from illicit activities. 

Another major development is the expansion of who should be held responsible for money laundering. From now on, legal entities could be liable for money laundering offences committed by their employees. Companies may also be subject to severe penalties, which could result in the company’s closure. Executives may also be held liable for money laundering offences committed within their organization as part of the EU’s plan to adopt “effective, proportionate and dissuasive criminal sanctions“.  

Recognizing the transnational challenges posed by organized crime and money laundering, AMLD6 promotes a rapid and effective exchange of information on suspicious transactions and ongoing investigations, as well as enhanced legal assistance in the collection of evidence and freezing of assets. It also promotes cooperation with specialized European agencies, such as Europol and Eurojust to facilitate the coordination of cross-border investigations. 

Finally, the legislation contains enhanced due diligence provisions for wealthy individuals with assets of more than €50 million, excluding their main residence, as well as an EU-wide limit of €10,000 for cash payments. 

The future of AML compliance. 

The implementation of AMLD6 has significant implications for businesses and financial institutions. Companies will now be required to protect themselves against compliance risks and adopt appropriate control mechanisms and systems, conduct regular audits, and raise awareness among their employees. This includes investing in advanced transaction monitoring and analysis technologies to proactively detect suspicious financial activity. These actions are necessary to protect the integrity of the company, avoid severe penalties, and maintain stakeholder trust. 

In addition, many industries that were not previously required to comply with certain AML regulations will now need to be more transparent with their transactions. For example, from 2029, top-tier professional football clubs involved in large-scale financial transactions, whether with sponsors, advertisers or in the context of player transfers, will have to comply with certain KYC rules. Like the financial sector, football clubs will have to verify the identity of their customers, monitor transactions and report any suspicious transactions to the FIUs. 

As money laundering and terrorist financing are global problems, measures adopted at EU level must be coordinated with international measures; otherwise they will have a very limited effect. The European Union must therefore continue to consider the recommendations of the Financial Action Task Force (FATF) and other international bodies active in AML/CFT. 

The new package of AML rules has now been published in the EU’s Official Journal, which means that companies will have up to two years to implement some measures and three years for others.  

Building trust through KYC in banking. How can you set up a KYC process that satisfies your customers and meets regulatory requirements? Download now to discover: What is KYC? The importance of KYC in the banking sector Regulatory impact on KYC processes Read now

By

Mallaury Marie
Content Manager at IDnow
Connect with Mallaury on LinkedIn


liminal (was OWI)

The Increasing Role of Behavioral Biometrics for ATO Prevention in Banking

The post The Increasing Role of Behavioral Biometrics for ATO Prevention in Banking appeared first on Liminal.co.

DHIWay

Product tracking, tracing and authenticity using CORD

The post Product tracking, tracing and authenticity using CORD appeared first on Dhiway.

Issue verifiable credentials using MARK Studio

The post Issue verifiable credentials using MARK Studio appeared first on Dhiway.

Ocean Protocol

DF104 Completes and DF105 Launches

Predictoor DF104 rewards available. DF105 runs Aug 29 — Sept 5, 2024 1. Overview Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor. Data Farming Round 104 (DF104) has completed. DF105 is live today, Aug 29. It concludes on September 5. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.
Predictoor DF104 rewards available. DF105 runs Aug 29 — Sept 5, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 104 (DF104) has completed.

DF105 is live today, Aug 29. It concludes on September 5. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF105 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.

To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.

To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF105

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF104 Completes and DF105 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Create a Starter Pack!

Create a starter pack today: personalized invitations that bring friends directly into your space on Bluesky.

To learn how to create a starter pack in English, read our guide here.

Today, we’re launching starter packs: personalized invitations that let you bring friends directly into your space on Bluesky!

An example starter pack.

Recommend custom feeds and users to help your community find one another. Get started from the Starter Packs tab on your Bluesky profile.

What’s in a starter pack?

Custom feeds. On Bluesky, you can set any algorithm or topic as your main timeline. Examples include Quiet Posters (posts from your quieter mutual follows) and Catch Up (the most popular posts from the last 24 hours).

Follow recommendations. Add your favorite accounts and encourage new users to follow them.

How do you create a starter pack?

Click the Starter Packs tab. On your profile, next to the media and likes tabs, you’ll see a new tab.

Create a starter pack. Use our auto-generation tool to create a starter pack, or build your own from scratch! You can create more than one starter pack. Click “Make one for me” to get a starter pack pre-filled with suggested users and custom feeds; you can add or remove items from this list. Or click “Create” to add users and feeds to your starter pack yourself. Set your starter pack’s name, description, and recommended users and feeds.

Share your starter pack! Every starter pack comes with a link and a QR code you can share. Message your starter pack to a friend, share it with your professional network, and post it on other social apps!

Say hello! You’ll be notified about users who join Bluesky through your starter pack.

Who can use starter packs?

Qualquer pessoa com uma conta no Bluesky pode criar pacotes iniciais.

Se você ainda não tem uma conta no Bluesky, pode se juntar através do pacote inicial de um amigo e começar com as personalizações recomendadas por ele. Assim que estiver no Bluesky, você pode adicionar/remover essas recomendações e personalizar ainda mais sua experiência.

Se você já está no Bluesky mas quer se integrar a outra comunidade ou obter as recomendações de seu amigo, você também pode usar o pacote inicial dele para adicionar à sua experiência!

FAQ sobre Pacotes Iniciais

Quantas pessoas e feeds posso adicionar ao meu pacote inicial?

Você pode recomendar até 150 pessoas e até 3 feeds personalizados. Novos usuários terão automaticamente os feeds Seguindo e Descobrir fixados.

Como posso compartilhar meu pacote inicial com mais pessoas?

Envie um link por mensagem para seus amigos, poste sobre ele em outras redes sociais, compartilhe com sua rede profissional! Cada pacote inicial vem com uma imagem de prévia gerada automaticamente que mostra o nome do seu pacote inicial e alguns usuários sugeridos para facilitar o compartilhamento.

Como encontro mais pacotes iniciais no Bluesky?

Você pode compartilhar pacotes iniciais diretamente no Bluesky, e verá uma prévia incorporada para esses links. Atualmente, os pacotes iniciais não aparecem na busca, então para encontrar um pacote inicial, um amigo terá que lhe enviar o link ou você poderá ver a prévia incorporada dentro do app do Bluesky.

Fui adicionado como usuário recomendado no pacote inicial de alguém. Posso me remover?

Quando você bloqueia o criador de um pacote inicial, você será filtrado e removido do pacote inicial dele. Você também pode denunciar um pacote inicial para a equipe de moderação do Bluesky (veja abaixo).

Posso denunciar um pacote inicial para a equipe de moderação do Bluesky?

Sim. Você pode denunciar um pacote inicial clicando no menu de três pontos no topo do pacote inicial. A equipe de moderação do Bluesky revisará todas as denúncias e as avaliará de acordo com nossas Diretrizes da Comunidade.

Posso incluir um serviço de rotulagem no meu pacote inicial?

Atualmente, não incluímos serviços de rotulagem nos pacotes iniciais — estamos trabalhando primeiro na melhoria da descoberta desses serviços no app e na confiabilidade dos serviços.

Wednesday, 28. August 2024

Matterium

BEYOND THE OUROBOROS — Finite and Infinite Crypto

Posting on X, Ethereum founder Vitalik Buterin recently expressed his concerns about the chain’s current use case, saying, “This worries me. Because it feels like an ouroboros: the value of crypto tokens is that you can use them to earn yield which is paid for by… people trading crypto tokens”. Famously, the ouroboros is the image of a snake eating its own tail, found in cultures across the world from ancient times, and Vitalik has hit the nail on the head here, yes, crypto does just eat itself.

Finite Crypto is token trading: a one-dimensional, zero-sum game where anyone making money does so through someone else losing money, not through creating real value. It is just shifting money about, and it has only a limited lifetime before capital moves on.

Infinite Crypto is opening up crypto to real-world uses: multi-dimensional, innovative, flexible, forward-looking. A non-zero-sum game, where money is made by creating real-world utility that generates true value. This has unlimited potential.

Currently, what “crypto” means to most people is a finite, one-dimensional, zero-sum game that is just about token trading; any “yield” a token seller gets comes at the expense of another token buyer losing money. The money just goes round in circles: crypto is not generating any new value, it is moving value from one person to another, and it relies on new money coming into the market to make it possible for existing token holders to cash out. As with a casino, the only winner in the end is the house; whatever someone does, those gas fees still have to be paid. It is all very finite and constrained. What appeal crypto has rests largely on the dollar's weakness: because crypto does not suffer from inflation the way the dollar does, buyers try to use it as a hedge against inflation.

Ethereum has the potential to create so much more: Infinite Crypto. But it isn't really being used for anything innovative now, and it isn't generating value in any real sense. Token trading is simply a way to move dollars about: token buyers spend their dollars on tokens, the token goes up, maybe the token goes down, and someone, somewhere, gains some value, then cashes their tokens out into dollars to spend in the real world (paying those gas fees along the way). Even when token trading is done in a hundred-percent-legal way, it is still just moving money from losers to winners; it all goes round in a circle and doesn't grow. Finite. At the moment, growth in crypto is mostly an illusion: it gets bigger because more retail investors put their savings in, not because crypto does something useful that increases value.

All this was neatly encapsulated, weirdly enough, by a scholar of religion named James P. Carse. He said “There are at least two kinds of games: finite and infinite” and defined them in this way: “A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play”. Currently crypto is a finite game, but crypto needs to become an infinite game, with evolving rules and boundaries, where the purpose is to keep things going and continue to create new value in as many ways as possible. We are done with the old crypto. Infinite Crypto awaits, free of the shackles and constraints of the finite token game and open to the multiplicity of reality.

Vitalik understands this better than most and realises its implications, saying, “while defi might be great it’s fundamentally capped and can’t be the thing that brings crypto to another 10–100X adoption burst.” Crypto has been around long enough that most people who feel at home with the token market as it is have already bought into it; there may be incremental growth in numbers, perhaps, but not the 10–100x step change that Vitalik sees the potential for. He can, though, see where that's coming from: “I would love to see a story for where the yield is coming from…that’s rooted in something external”. The next step for Ethereum lies in connecting to the infinite possibilities of the real world, in other words.

Crypto as it stands is playing the finite game. Infinite Crypto is where we need to take things next, breaking out of the current doom loop of finite crypto. Infinite Crypto is where the growth is; that's what will make sustained money for everyone. If we fail to break out of the doom loop, the capital will eventually go elsewhere and the blockchain will end up like Second Life (do any of you even remember Second Life? Second Life was the future once, long, long ago): a niche digital world with almost no impact on real life. Finite games always end; they become stagnant, innovation stops, they die.

But this is not what Vitalik and the team created Ethereum for; it was created for Infinite Crypto. It started with a vision of transforming the entire world, but it has become limited and massively inward-facing, all about those finite zero-sum games. You can play the casino game just as happily with Bitcoin as you can with Ethereum, if you really want to, but Vitalik and his team built Ethereum for smart contracts, and the real world is built on contracts. Find a way to enable Ethereum to streamline real-world contracts through smart contracts and it starts to generate actual yield: yield for potentially everyone involved, not yield produced by taking money from losers to give to winners (and on a pretty random basis at that). There's an infinity of opportunity for the taking.

A conservative estimate suggests that there's half a TRILLION dollars to be gained by enabling efficiency savings in international trade and business, the kind of efficiency savings that Ethereum is eminently well equipped to provide. The International Chamber of Commerce reckons there's $280 billion in things like import and export deals, currently encumbered with telephone-book-thick paper documentation (yup, they still print it all out and cart it around). Then there's $100 billion from the deregulation of US real estate commissions, which opens them up to innovative ways of dealing with property contracts and all that associated paperwork, not to mention real estate in the rest of the world. On top of that there's likely to be well over a hundred billion in other savings here and there, so half a trillion is probably on the conservative side. Then there are value-added services in the real world that could use Ethereum: it can deliver proven, valid data for AI-based searches on real estate that prevents the AI from hallucinating, for example. If there is any doubt about its veracity, the data can be checked back to the blockchain and verified.

This is all business that could be transacted over Ethereum, business with real, actual yield, the kind of yield Vitalik means here, and it is pretty much infinite. Vitalik is reasserting the original vision; he is reminding us of the way, that this was the future once.

Ethereum has had its playpen stage, where idealistic utopians dreamed of a financial system untethered from the state and from tax, and has seen that largely swept away by ruthless speculators and, yes, outright scammers, who have turned the whole space into a dog-eat-dog wilderness (with RFK Jr we've seen what happens to your reputation when it's alleged you eat dogs…). Now though, with Vitalik's lead, it's time to grow up and grow out, to connect Ethereum to stuff that generates yield all round and use the business world to drive the 10–100x adoption that Ethereum is ripe for: Infinite Crypto. Right now, crypto risks simply stalling out and senescing. It is basically on life support from people sacrificing their futures to buy a bunch of worthless shit coins, and lending on crypto assets is just a way of building up leveraged positions and instruments that have no economic fundamentals. All that technology could be doing mortgages instead: business, with actual yield.

We have every opportunity to make Ethereum economically productive in the real world without breaking the law. The future for the blockchain has never been brighter, but that future is only accessible after the scamming stops, we break out of the loop and attain the infinite.

We need to get back to the original Ethereum vision

This is good news for me. After being the Ethereum launch coordinator in 2015, I set up Mattereum in 2017 to achieve that future. Since then, we’ve been working on laying the foundations, putting the tools in place to enable Ethereum to interact effectively with the real world. We’ve sorted out the lawtech so we can make smart contracts enact real world contracts that are legally binding, and backed them with warranties that work under the 1958 New York Convention on Arbitration, so they stand up in court in any of 170 countries. We have the tools that connect Ethereum to the physical world, the tools that can be used to bring those efficiencies to world trade, that enable novel, creative business solutions to use Ethereum.

Vitalik has given us the direction, we have built the tools — together we can uncoil the snake, Infinite Crypto is within reach.

BEYOND THE OUROBOROS — Finite and Infinite Crypto was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

What you need to know about Mobile Driver’s Licenses

The post What you need to know about Mobile Driver’s Licenses appeared first on Indicio.
A Mobile Driver’s License (mDLs) is a digital specification for a physical driver’s license. Given that driver’s licenses are widely used for identification, it’s likely that a digital version would enjoy similar ubiquity online. Here, we look at what exactly they are (are they verifiable credentials?) their benefits, and why they are not currently widely available.

By Tim Spring

It all starts with the International Organization for Standardization (ISO) 18013 series. In a nutshell, this series of standards creates a common framework for international recognition of a digital driver's license.

The standard lays out the scope as follows:

You must use a machine to obtain the mDL.

The mDL must be tied to the mDL holder.

You must be able to authenticate the origin of the mDL data.

You must be able to verify the integrity of the mDL data.

Critically, there are two things the standard does not cover:

How the holder’s consent to share their data is obtained.

Any requirements on how the mDL data is stored.

So now we know what the mDL is: it is a driver’s license that can be stored on your mobile device and is tied to you. It can be proven to be as accurate as a physical card because we can prove that it was issued by a proper authority — such as the department of motor vehicles — and prove that the integrity of the data has not been compromised.

But an mDL is not the same as a verifiable credential because the mDL data can technically be stored in a siloed database. However, a verifiable credential, which allows a person to hold their data, could absolutely fit this standard and be used to easily issue mDLs, as they meet all the other requirements laid out above. 

The benefits 

The benefits of using mDLs are similar to the benefits of using verifiable credentials. They are simple to verify and use, convenient, and often more secure than a physical document.

There are guides written on how to spot a fake ID. This is because each state has its own methods for trying to make its driver's licenses difficult to counterfeit. An mDL offers a much simpler way to verify a person's identity, or their age for eligibility to purchase goods: all you need to do is scan the QR code and the software will tell you. You don't need a flashlight, or to look for holograms.

Most people also now have a mobile device that is always with them. Carrying a digital version of your driver's license means you don't have to worry about accidentally leaving your ID somewhere or fishing through a bag to find it; it is always at your fingertips.

Lastly, the security features of these mDLs, especially if they are created through verifiable credentials, are hard to match. If the mDL is a verifiable credential, forgery becomes extremely difficult because the software can cryptographically verify the origin of the data, and there is an additional layer of security from the data being stored on the holder's mobile device instead of a centralized database, reducing the risk of large-scale data breaches.
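To make the origin and integrity checks concrete, here is a minimal sketch. It is not the actual mDL protocol: real mDLs under the ISO 18013 series use asymmetric signatures checked against the issuing authority's public key, and every name here (the demo key, `issue_mdl`, `verify_mdl`) is hypothetical. A keyed HMAC over the canonical license data stands in for the issuer's signature so the idea can be shown with the standard library alone.

```python
import hashlib
import hmac
import json

# Stand-in for the issuing authority's signing key. In a real mDL this
# would be an asymmetric key pair, and verifiers would only ever hold
# the public half.
ISSUER_KEY = b"dmv-demo-key"

def issue_mdl(data: dict) -> dict:
    """Issuer side: attach a tag binding the data to the issuer."""
    payload = json.dumps(data, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"data": data, "tag": tag}

def verify_mdl(mdl: dict) -> bool:
    """Verifier side: recompute the tag; any tampering changes it."""
    payload = json.dumps(mdl["data"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mdl["tag"])

mdl = issue_mdl({"name": "Alice Example", "over_21": True})
print(verify_mdl(mdl))          # True: data intact, origin checks out

mdl["data"]["over_21"] = False  # tamper with the credential
print(verify_mdl(mdl))          # False: the integrity check fails
```

The point of the sketch is the shape of the check, not the primitive: the verifier needs nothing from a central database, only the credential itself and the issuer's key material.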

Why are these mDLs not commonplace?

One of the reasons why these credentials have not yet been widely adopted is that regulations have not kept up with the technology.

In the US, the REAL ID Act of 2005 wasn't updated until the end of 2020 to include permission for digital and mobile driver's licenses. But the federal government leaves the issuing of driver's licenses to each state, meaning that the state governments also have to vote on implementation; as of August 2024, only 13 states have passed legislation to start issuing mDLs.

If they are being issued by your state, they are not currently a replacement for your license but an additional way to represent it, meaning that you will likely still have a physical license somewhere. This could be another reason that many haven't adopted them: they see it as an unnecessary add-on.

It’s important to remember that this technology is still new. Many people might not understand or trust it yet, but as the world shifts to be more digital, it will be a big part of how we prove our identity moving forward. 

If you are part of an organization looking into mDL technology, or a better way to prove your identity online, Indicio can help! Get in touch with our team of experts today.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post What you need to know about Mobile Driver’s Licenses appeared first on Indicio.


Ontology

Why Elon Musk’s Support for California’s AI Bill Highlights the Need for Decentralization

As AI becomes more embedded in every aspect of our lives, the debate around California’s AI Safety Bill (SB 1047) highlights a critical issue: the risks of centralized AI control. While the bill attempts to mitigate these dangers, the real solution lies in decentralization — distributing control and ensuring that AI systems align with human values, privacy, and security.

The Risks of Centralized AI

Centralized AI systems, controlled by a few powerful entities, pose significant dangers. We’ve already seen how centralized control can lead to data misuse, biased algorithms, and even AI-driven censorship. When a handful of corporations dictate the direction of AI development, the risks of abuse and manipulation skyrocket. For example, if a single entity controls the data and algorithms behind AI-driven surveillance, the potential for privacy violations and authoritarian control becomes disturbingly real.

Decentralization isn’t a buzzword; it’s the backbone of a system we can trust. Unlike centralized models that concentrate power, decentralization spreads control across a network, making it nearly impossible for any one actor to manipulate or exploit the system. Decentralized identity (DID) systems, for instance, enable individuals to maintain ownership of their digital identities. This ensures that interactions with AI are grounded in verified, user-controlled data — without the risk of breaches or exploitation by a centralized authority.

The Role of Decentralized Identity and Privacy

DIDs, like those powered by Ontology’s ONT ID, are a cornerstone of decentralized AI. In a world where AI might drive everything from financial transactions to governance, ensuring that human values and rights are upheld is critical. Decentralized systems provide a framework where proofs of identity, timestamped transactions, and zero-knowledge proofs can be securely integrated, preventing AI from being hijacked by non-human interests.

Moreover, privacy must be a cornerstone of AI development. Today’s centralized AI models often rely on vast amounts of personal data, raising serious concerns about surveillance and misuse. Decentralized approaches, powered by technologies like zero-knowledge proofs, allow for the validation of data without compromising privacy. This ensures that AI systems remain transparent and accountable, free from the risks of censorship or manipulation.
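The building block behind such privacy-preserving validation can be sketched with a salted hash commitment. This is an assumption-laden illustration, not Ontology's implementation and not a full zero-knowledge proof: a real ZK proof would let the holder prove a statement about the value (say, "age is at least 21") without ever revealing it, whereas a commitment only hides the value until it is deliberately opened.

```python
import hashlib
import secrets

# Hypothetical sketch of a salted hash commitment: the holder publishes
# a commitment that reveals nothing about the value, and can later open
# it so a verifier confirms the value was fixed in advance.

def commit(value: str) -> tuple[str, str]:
    """Return (commitment, salt). The commitment alone leaks nothing."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest, salt

def open_commitment(commitment: str, salt: str, value: str) -> bool:
    """Verifier checks the opened value matches the earlier commitment."""
    return hashlib.sha256((salt + value).encode()).hexdigest() == commitment

c, salt = commit("1990-04-01")                 # holder commits to a birthdate
print(open_commitment(c, salt, "1990-04-01"))  # True: honest opening
print(open_commitment(c, salt, "2010-04-01"))  # False: the value cannot be swapped
```

The salt prevents a verifier from guessing low-entropy values (like birthdates) by brute force before the commitment is opened; that binding-plus-hiding property is what the heavier zero-knowledge machinery builds on.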

Global Context and the Future of AI Regulation

California’s AI Safety Bill is part of a growing global trend toward regulating AI. The European Union’s AI Act, for instance, introduces strict guidelines on the use of AI in high-risk areas, but it doesn’t take effect until 2025. Meanwhile, China’s approach to AI regulation is more focused on controlling and harnessing AI for state objectives, often at the expense of individual freedoms. In this landscape, decentralization offers a way to protect innovation while ensuring that AI development remains aligned with democratic values.

By contrast, decentralized AI frameworks ensure that no single entity holds too much power over these systems. They offer a pathway to develop AI technologies that are resilient, transparent, and aligned with public interests. This approach could prevent the kind of monopolistic practices that have plagued the tech industry for years, while fostering innovation in a way that centralized models cannot.

Conclusion: A Call for Decentralized Solutions

The California bill may mean well, but by doubling down on centralization, it misses the mark. We don’t need more gatekeepers; we need systems that empower individuals, protect privacy, and resist censorship. Decentralization isn’t just a technical fix; it’s a moral imperative for the AI-driven world we’re hurtling toward.

As discussions around AI regulation continue, it’s clear that decentralization isn’t just a technical choice — it’s a fundamental necessity. By embracing decentralized technologies, we can build AI systems that are not only safe and trustworthy but also aligned with the principles of self-sovereignty and privacy. At Ontology, we’re committed to leading this charge, creating the frameworks that will ensure AI serves humanity — not the other way around.

Read more Ontology snippets here: https://ont.io/news/1086/The-Telegram-CEOs-Arrest-Highlights-the-Urgent-Need-for-Decentralization-and-Privacy-Protections

Why Elon Musk’s Support for California’s AI Bill Highlights the Need for Decentralization was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

DNP Launches Platform for Building Decentralized ID-based Digital Credential Issue and Verification System

The post DNP Launches Platform for Building Decentralized ID-based Digital Credential Issue and Verification System appeared first on Indicio.

auth0

Identity Challenges for AI-Powered Applications

What are the Identity security challenges that developers of AI-based applications must be aware of? Let’s explore some of them.

Trinsic Podcast: Future of ID

Karyl Fowler - From Transmute to Global Trade and the Power of Digital Identity

In this episode, I sit down with Karyl Fowler, co-founder and CEO of Transmute, a company at the forefront of integrating modern identity technology into global trade. Before founding Transmute, Karyl's work in the semiconductor and bioelectronics industries provided her with unique insights into the complexities of global supply chains.

We explore a variety of topics, including:

The challenges of digitizing trade documentation and how Transmute is solving the multi-billion dollar paper problem

The evolution of decentralized identity and its application to physical goods and cross-border commerce

Key lessons learned from working with regulators and how Transmute has navigated the highly regulated trade industry

Karyl offers valuable perspectives on the future of trade and digital identity, making this an episode you won't want to miss!

You can learn more about Transmute on their website: transmute.industries.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


DHIWay

Dhiway makes the Finternet possible

The BIS Working Papers No. 1178 (PDF), authored by Agustín Carstens and Nandan Nilekani, introduces “Finternet: the financial system for the future”, which holds immense potential for the financial sector and promises a brighter future.

The paper outlines a way to unlock the potential within the financial sector by enabling an architecture that draws on the Internet, decentralization, and unbundling. Dhiway is one of the small core group of companies working on developing the concepts in the paper into a functioning system. With us on this journey are Silence Laboratories, JUSPAY, Rooba Finance, and the Solana Foundation.

At the outset there is an exposition of the vision for the Finternet: multiple financial ecosystems interconnected with each other, much like the internet, designed to empower individuals and businesses by placing them at the centre of their financial lives. It advocates a user-centric approach that lowers barriers between financial services and systems, thus promoting access for all.

The tokenization of real-world assets is an integral component of the Finternet. With tokenization comes the need for a well-designed governance system built on regulatory frameworks with which the technology choices are compliant. If you are not yet familiar with asset tokenization, here is a primer written by Suraj Atreya, which is necessary reading.

Blockchain technology is a key piece of the technology infrastructure, and it is where CORD, our Open Trust Infrastructure, fits in to enable the design of innovative applications and solutions.

The emergence of the Finternet is not just a blueprint for information technology architecture. It is conceptualised to unbundle traditional, centralized financial systems using the values of innovation, transparency, enhanced security, cost efficiency and interoperability, all while remaining very user-centric.

Find out more about the work underway at Finternetlab.io

The post Dhiway makes the Finternet possible appeared first on Dhiway.


Samagra and Dhiway come together to build a developer community for CORD.

Samagra Development Associates Private Ltd (“Samagra”), engaged in implementing Code for GovTech (C4GT) to build and sustain developer communities, has joined hands with Dhiway, a leading provider of enterprise Web 3.0 open trust infrastructure, to create communities of innovation around the open-source Layer 1 blockchain framework CORD.

Dhiway and Samagra will offer structured mentorship and outreach engagement programmes for community members to build innovative solutions to solve complex nation-scale challenges using the CORD blockchain.

This partnership will also foster engagement with industry stakeholders, government agencies and regulatory bodies to help build awareness and engagement around Open Trust Infrastructure.

Nitin Kashyap, Senior Vice President and Head of Product at Samagra stated, “India is making remarkable strides in building DPGs and DPI. As we set new benchmarks, it becomes crucial to ensure the adoption, maintenance, and sustainability of DPGs and open-source technology for the public good. Achieving population-scale impact requires a comprehensive, whole-of-system approach. Through initiatives like C4GT, we aim to unite organizations and contributors to drive this mission as a community. Our collaboration with Dhiway marks a significant step forward in strengthening this community.”

K P Pradeep, CSO at Dhiway, emphasized, “Today it is critical that developers acquire the habit, discipline and knowledge for building at scale using the CORD Blockchain framework. The multiplier effect of open standards, open source software, open protocols, and open trust infrastructure will unlock the potential to solve challenges for India and the world. Samagra’s focus on enabling DPGs that fit within a DPI complements our vision of reshaping the digital future.”

About Samagra

Samagra is a mission-driven governance consulting firm that works exclusively with governments to transform governance. This involves working with the senior political and bureaucratic leadership of states and the Centre on deep systemic reforms, leveraging tech and data to strengthen the state’s capacity to deliver sustainable outcomes at scale across domains like education, agriculture, skilling, employment, health and public service delivery, among others.

About Dhiway

Dhiway is a trust infrastructure company reshaping the digital future through population-scale technology solutions. We enable enterprises and government agencies to address key challenges around data stores, data exchange and data assurance through the CORD Blockchain – a Layer 1 enterprise blockchain technology.

The post Samagra and Dhiway come together to build a developer community for CORD. appeared first on Dhiway.


Integra and Dhiway Partner Up to Expand Verifiable Credentialing

Integra Micro Systems Pvt Ltd (“Integra”), a leading provider of advanced technology products and solutions across sectors such as BFSI, Telecom, Government, Retail/eCommerce, Enterprise, and Airlines, has announced a strategic partnership with Dhiway, a pioneer in enterprise Web 3.0 open trust infrastructure. This collaboration aims to revolutionize the business of verifiable credentialing and drive forward application modernization efforts.

Integra’s expertise in Product and Tech Stack Development, Identity Authentication, IT Infrastructure Modernization, Application Modernization, Enterprise Automation, IT/Network Automation, Zero-Trust Architecture, Bot-AI-ML, DevSecOps, and Systems Integration will be instrumental in this joint initiative. By integrating Dhiway’s state-of-the-art Web 3.0 infrastructure, the partnership will enhance the deployment and scalability of digital credentials, streamline automation processes, and modernize infrastructure to effectively manage and verify digital trust and security. This synergy seeks to expand the acceptance network for verifiable credentials, ensuring that modern applications and systems are equipped to handle and secure digital records efficiently.

Mahesh Jain, Managing Director at Integra, stated: “Our partnership with Dhiway marks a significant step forward in our mission to modernize and secure digital ecosystems. By leveraging Dhiway’s cutting-edge Web 3.0 infrastructure, we are poised to transform the landscape of verifiable credentialing. Additionally, we intend to extend our Wallet software to support CBDC, NFTs, and Crypto, utilizing Dhiway’s robust blockchain technology. This collaboration not only enhances our capabilities in application modernization and digital trust but also aligns with our commitment to driving innovation and efficiency across industries. Together, we are setting new standards for digital identity management and trust infrastructure, paving the way for a more secure and reliable digital future.”

Satish Mohan, CEO at Dhiway, emphasized: “We are excited to welcome Integra into the Dhiway ecosystem. Our Open Trust Infrastructure, built on the foundation of Web 3.0 and state-of-the-art cryptography, has revolutionised how organisations secure and exchange data with continuous assurance. This partnership with Integra reinforces our commitment to advancing digital trust, especially within the financial sector. Together, we are poised to redefine the standards for secure and transparent digital ecosystems, delivering unparalleled value to our customers.” 

About Integra Micro Systems Pvt Ltd

Founded in 1982, Integra Micro Systems Pvt Ltd is a leader in innovative solutions for the Government, BFSI, and Telecom sectors. The company has a rich history of pioneering advancements, including being the first to port UNIX on Indian hardware, transitioning to Linux in the mid-90s, and developing the WAP stack for handheld devices. In 2007, Integra introduced the MicroATM device, revolutionizing financial inclusion in India and laying the groundwork for Aadhaar-based payment systems. Today, Integra excels in Digital Transformation, offering solutions in Enterprise Automation, Infra Modernization, Software Development, Systems Integration, AI/ML-based analytics, and advanced digital identity management, driving efficiency and progress across various industries.

About Dhiway

Dhiway is a trust infrastructure company reshaping the digital future through population-scale technology solutions. We enable enterprises and government agencies to address key challenges around data stores, data exchange, and data assurance through the CORD Blockchain – a Layer 1 enterprise blockchain technology.



The post Integra and Dhiway Partner Up to Expand Verifiable Credentialing appeared first on Dhiway.


Caribou Digital

Conjuring innovation: Tech pilots as products

A recent Forbes article claimed ‘Blockchain makes cash-based humanitarian aid secure, fast and transparent’.

But how do aid professionals actually experience it?
Are these claims truly being fulfilled?
What impact does blockchain innovation have for organisations in practice?

My latest research article (Conjuring a Blockchain Pilot: Ignorance and Innovation in Humanitarian Aid) lifts the bonnet on humanitarian innovation. Based on ethnographic research in Jordan, I explore what is at stake when an aid organisation experimentally applies a blockchain pilot project in refugee camps.

This innovation, I suggest, comes with a mix of genuine promise and authentic expertise, but also blind faith and strategic ignorance.

Tech pilots aren’t just designed to help people: regardless of what they achieve, they are valuable products for aid industry actors to promote.

The Blockchain Pilot

The Blockchain Pilot was introduced to replace the traditional cash-in-hand system with a blockchain-based digital wallet, integrated with biometric iris recognition. This system aimed to improve the security, speed, and transparency of aid payments while significantly reducing costs by bypassing conventional financial intermediaries. It also promised to empower Syrian refugee women by providing them with independently held digital wallets. However, a key appeal of the pilot was its potential to attract funding and boost the organisation’s reputation among donors.

How conjuring works: Ignorance in innovation

In the paper I argue that The Blockchain Pilot was ‘conjured’ as a product to be promoted to a competitive marketplace of aid donors. In social studies of capitalist markets, ‘conjurings’ are the spectacles and magical appearances that draw an audience of investors. I suggest that conjurings are not just about appearance and show. They involve key forms of ignorance: (i) confusion, (ii) illusion, (iii) disappearance, and (iv) misdirection.

i. Confusion
Aid professionals involved in the pilot expressed confusion about blockchain. Despite being expected to represent and defend the pilot, most staff had little understanding of how blockchain operated. This confusion was not unique to this organisation. The universal mystification surrounding blockchain made promotional claims about it difficult to evaluate or refute.

ii. Illusion
Blockchain was often treated as a magic technological object capable of achieving a range of desirable effects without clear explanation. Aid professionals conflated blockchain with other features of automation or digitalisation that did not actually require blockchain. ‘Digital wallet’ was a misnomer: refugees could not access the balance and transaction record on a personal device; they could not credit money, only withdraw it; they did not have custody of the wallet; the aid organisation did.

iii. Disappearance
The hierarchical design of the system meant that aid workers did not have access to the blockchain ledger. This design reinforced existing power asymmetries within the organisation and disconnected aid workers from valuable information. Aid workers disappeared from the aid delivery process, replaced by private companies and biometric cameras.

iv. Misdirection
Promoting The Blockchain Pilot often involved diverting attention away from its negative impacts on people. Aid organisations focused on quantitative metrics like cost-effectiveness and transaction speed, while downplaying the social and practical challenges faced by the refugees and aid workers.

Ignorance is not an insulting term denoting simply the absence of knowledge. It is actively produced, it can be both strategic and inadvertent, and it is shaped by hierarchical power relations and neoliberal business models in aid. The politics of ignorance is therefore something we need to take seriously when we analyse organisations and technological change.

This study is not just a cautionary tale for practitioners in aid. Beyond refugee camps and beyond blockchain, the conjuring of innovation products can take precedence over delivering meaningful value to the people they enrol.

Conjuring innovation: Tech pilots as products was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Dock

A Deeper Look at Credential Monetization and Ecosystem Payments

In our 2023 Masterclass on Reusable Digital Identity, we explained how verifiable credentials simplify organizations’ processes and improve customers’ experience by making it easy to reuse trusted identity data across business partners. This led us to focus our 2024 Roadmap on creating tools to simplify the management of digital identity ecosystems. With the help of our early adopters, who provided valuable feedback, Dock Certs now contains simple-to-use tools for managing the trust relationships in a custom ecosystem.

Full article: https://dock.io/post/a-deeper-look-at-credential-monetization-and-ecosystem-payments


BlueSky

New Anti-Toxicity Features on Bluesky

We are publishing a series of blog posts on Trust & Safety efforts at Bluesky. This is the first in the series.

Trust and Safety (T&S) affects everything — from community policy and spam detection, all the way to the order that replies show up on a post. At Bluesky, the product team works hand-in-hand with T&S to design features that balance safety, ease of use, and fun.

In this blog, we’re looking specifically at toxicity (harassment, dunking, etc.) and some steps we’re taking to mitigate it from the product perspective. Be sure to update your app to the latest version (1.90) to access many of these features!

Detaching quote posts

As of the latest app version, released today (version 1.90), users can view all the quote posts on a given post. Paired with that, you can detach your original post from someone’s quote post.

This helps you maintain control over a thread you started, ideally limiting dog-piling and other forms of harassment. On the other hand, quote posts are often used to correct misinformation too. To address this, we’re leaning into labeling services and hoping to integrate a Community Notes-like feature in the future.

Note: Like blocks, quote post removals are public data. The Bluesky app won’t list all the quote post removals directly on your post, but developers with knowledge of the Bluesky API will be able to access this data.

Detaching the original post from a quote post.

Hiding replies

In app version 1.90, you can now hide replies on your post. Only the original creator of the thread can hide replies. All hidden replies will be placed behind a Hidden replies screen — so they’re still accessible, but much less visible.

Note: Hidden replies – and which posts were hidden by the author – are still public data.

How to hide a reply.

Priority notification filters

If you navigate to Notifications and click the Settings cog in the top right corner, you can now do more to manage your notifications. With the priority notifications feature, you can filter your notifications to only receive updates from people you follow. We hope this is helpful for people with large followings who are always receiving an influx of notifications, and also for people who may not have expected that their post would get so much attention.

We’ll keep tuning this feature and adding additional options for notifications.

Find the priority notifications filter setting in the Notifications tab.

Changes to how replies show in timelines

Historically, the Bluesky app has shown every reply in the Following feed. This means that every reply has the same visibility as a top-level post, which is often not a user’s intention. We’re reducing the frequency of showing replies in the Following feed to only show conversations that involve replies between at least two people you follow.

Additionally, this update should make it much easier for you to update older threads. Now, when you reply to an older thread of yours, it’ll get bumped to the top of your followers’ feeds. (You’ll no longer have to repost your own reply to surface it to your followers.) This update also prevents replies from being separated from the top-level post, making them easier to understand.
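As a rough illustration (not Bluesky's actual feed code), the new reply rule can be sketched as a filter: a reply surfaces in the Following feed only when the conversation involves at least two people the viewer follows. The function name and inputs here are hypothetical.

```python
def show_reply_in_following_feed(reply_author, thread_participants, following):
    """Hypothetical sketch of the new rule: surface a reply only when the
    conversation involves at least two people the viewer follows."""
    followed_in_thread = {p for p in thread_participants if p in following}
    return reply_author in following and len(followed_in_thread) >= 2
```

Under this sketch, a reply by someone you follow in a thread with another followed account is shown, while a reply whose conversation involves only one followed account is filtered out.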

How replies are now displayed.

Applying blocks to lists

Bluesky has three kinds of lists: starter packs, curated user lists, and moderation lists.

Now, when you block the creator of a starter pack or a curated user list, you’ll be filtered out of any lists they create. (Blocks still have no effect on moderation lists, because that would defeat their purpose.)

Additionally, we’re updating our policies around acceptable list titles and descriptions and will be labeling lists more aggressively. We’ll share more on this in a blog post next week from the Trust & Safety team.

Future work

Product work, especially as it relates to Trust & Safety, is always a continuous effort. We’re also making some updates on our backend infrastructure to combat ban evasion, botnets, and other forms of toxicity.

We’ll be publishing an update next week from the Trust & Safety team on some of these efforts.


TBD

Open Standards at TBD

How TBD is leveraging open standards

At TBD, we are committed to building a decentralized future where users have greater control over their data and organizations can interact in a more open, trustworthy, and secure way. Open standards are the foundation of this vision, enabling seamless collaboration and interoperability across systems.

Everything we do at TBD is enabled and strengthened by open standards. Our most notable projects, Web5 and tbDEX, are deeply rooted in these open standards. The frameworks for decentralized identifiers (DIDs), verifiable credentials (VCs), and the protocols that facilitate their sharing form the backbone of our work.

Our Approach to Open Standards

Open standards ensure that different systems and organizations can work together seamlessly, creating a cohesive environment where data and identity can move across personal and organizational boundaries.

At TBD, we are deeply involved in several key standards bodies to ensure that the standards we rely on are robust and interoperable:

Decentralized Identity Foundation (DIF): This organization serves as an incubator for new ideas and standards related to decentralized identity. We are actively contributing to several key initiatives here, such as decentralized web nodes and trust establishment protocols.

W3C: The World Wide Web Consortium (W3C) is the authority on web standards, and we are heavily involved in their work on DIDs and VCs. W3C’s role in defining these standards is crucial for ensuring their broad adoption across the web.

OpenID Foundation: We’re also working with the OpenID Foundation to integrate their standards with VCs and DIDs. This work is focused on extending OpenID’s capabilities beyond web-based applications, making them applicable in backend services and mobile environments.

One of our main tasks is ensuring that our software aligns with these standards. Our Web5 spec and tbDEX spec are prime examples of adopting existing specifications to meet our broad interoperability needs.

Current Focus Areas

Our ongoing work in the standards space is focused on several key areas:

Interoperability: We’ve defined an interoperability profile for tbDEX, which outlines the standards we’re using and how they interact. This is a starting point for enabling seamless exchanges on the tbDEX network.

Selective Disclosure: As we look to enhance user privacy and control, we’re exploring the use of selective disclosure credentials. This allows users to share only the information necessary for a specific interaction, rather than their entire credential.

Trust Frameworks: We’re also working on establishing a trust framework that will enable different organizations to agree on legal and compliant ways to trust one another. This is particularly important for interactions on the tbDEX network, where trust is paramount.
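To make the selective disclosure idea above concrete, here is a minimal sketch loosely modeled on salted-hash schemes such as SD-JWT. It is illustrative only, not TBD's implementation: the issuer seals each claim behind a salted digest, and the holder later reveals only the claims a given interaction requires.

```python
import hashlib
import json
import secrets

def seal_claims(claims):
    """Issuer side: salt and hash each claim; the credential carries only digests."""
    digests, disclosures = {}, {}
    for name, value in claims.items():
        blob = json.dumps([secrets.token_hex(8), name, value])
        disclosures[name] = blob  # kept privately by the holder
        digests[name] = hashlib.sha256(blob.encode()).hexdigest()
    return digests, disclosures

def verify_disclosure(digests, name, blob):
    """Verifier side: check one revealed claim against its sealed digest."""
    if hashlib.sha256(blob.encode()).hexdigest() != digests[name]:
        raise ValueError("disclosure does not match credential")
    _salt, claim_name, value = json.loads(blob)
    return claim_name, value
```

A holder can then hand over a single disclosure, proving one claim without revealing any of the others sealed in the same credential.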

Looking Ahead

As we advance our projects, we remain focused on refining our specifications to ensure they are well-defined, thoroughly tested, and widely adopted. This includes ongoing work on the Web5 spec, which we are continuously improving with better test vectors and more robust compliance checks.

We’re also making significant strides with our Rust Core approach, which will form the basis for many of our SDKs. This effort will allow us to support multiple languages more efficiently and ensure greater consistency across our implementations.

The work we’re doing now is laying the groundwork for a decentralized future where users have more control over their data, and organizations can interact in a more open, trustworthy, and secure way. As we move forward, our commitment to open standards will remain at the heart of everything we do.

Get Involved

If you're working on implementing verifiable credentials or DIDs, please reach out!

Join our Discord community for direct access to our team and ongoing discussions. You can also find us on Twitter @TBDevs.

We look forward to your contributions and questions!

Tuesday, 27. August 2024

Finicity

New report: Building the future of bill payments 

In today’s rapidly evolving digital landscape, consumer preferences and expectations are reshaping the way we engage with financial transactions. Choice lies at the heart of consumers’ financial lives, including how they pay their bills — from traditional methods like checks and cards to emerging technologies like account-to-account payments.  

To understand how consumers prefer to pay their bills and why, and how they want to do so in the future, Mastercard surveyed over 2,000 consumers across the U.S. We explored the evolving landscape of consumer payment preferences, focusing specifically on the intersection of choice, convenience, and security, and how these core tenets will shape the future of bill payment.  

Explore some of the highlights of the report below or download the full report here

An overview of bill payments and preferences  

Consumers are looking for a seamless, efficient, secure way to pay their everyday expenses. The research shows that they are consistently turning to credit and debit cards, as well as options where they can pay directly from their bank accounts, like Bill Pay and ACH/e-check options.   

Credit cards top the list of payment methods most often used for recurring bills at 47%, followed by bill pay features through banks at 41%, debit cards at 39%, and ACH at 37%.  

Looking forward, respondents are inclined towards similar payment methods for future recurring bills, with credit cards and bill-pay-by-bank features leading the way. This trend underscores the reliability and trust needed for recurring expenses. 

Get all the insights by downloading the full report

Consumers are driven by choice  

Consumers want three fundamental things in their payment experiences: choice, convenience, and security, and they want payment solutions that empower these elements.   

Respondents place high value on choice and flexibility in payment methods when paying their bills: an overwhelming majority expect businesses to provide multiple payment options, indicating a strong demand for variety in how they pay.    

However, only 51% of respondents feel they are frequently given the opportunity to choose their preferred payment method. This suggests a sizable gap in businesses meeting these expectations consistently.  

Convenience, cost and security pave the way for open banking  

Based on the data, there is a clear opportunity for more businesses to embrace new kinds of payment methods supported by open banking technology.  

These new methods use consumer-permissioned connections to bank accounts for payment data rather than having the consumer input their card or account and routing numbers.  

The majority of consumers, across all age groups, are open to new pay-by-bank methods that would save billers money and reduce the likelihood of non-sufficient fund returns – as well as offering security, convenience, and support for consumers to manage their finances.  

Download the bill payments report to learn more about how open banking increases choice in bill payments for consumers and businesses, or head over to our open banking blog for inspirational use cases and insights. 

The post New report: Building the future of bill payments  appeared first on Finicity.


TBD on Dev.to

How Web5 and Bluesky are Building the Next Layer of the Web - A Comparative Analysis

As companies increasingly commodify our personal data and privacy breaches make headlines, many technologists are creating user-centered frameworks that empower individuals to control their digital identities and personal information. This concept, known as Self-Sovereign Identity (SSI), enables users to decide what data they share and with whom. While blockchain technology is a popular choice for implementing SSI, companies like TBD are exploring (and even creating) alternative technologies to achieve these goals.

My Perspective on the State of SSI

Our efforts at TBD are part of a larger movement. In fact, there’s a consortium of tech giants and startups working together through the Decentralized Identity Foundation to establish open standards and best practices for SSI, focusing on:

Digital Identity Interoperability
Data Ownership
Reliable digital verification methods

The SSI industry is making tangible progress, especially in government sectors, as our technological solutions support the advent of Mobile Driver's Licenses.

However, one of my concerns with our industry is that every company is implementing its own proprietary methods. Despite aiming to solve similar problems, companies are developing their own unique DID methods, wallets, and tooling. This fragmentation raises questions for me:

Can we achieve widespread adoption with disparate systems?
Will the multitude of competing mechanisms overwhelm both users and developers?
Will our various systems eventually work in tandem?

In November 2023, I began investigating the answers to these questions through a livestream series where I interviewed SSI experts from different companies. After conducting approximately 30 interviews, these questions remain unanswered. However, I’ve gained more in-depth knowledge about:

Key players in the SSI space
Various technical approaches to implementing SSI
Real-world applications of SSI

Interviewing Bluesky

I most recently interviewed Dan Abramov, creator of Redux and React core team member, about his work at Bluesky and the development of Bluesky's underlying technology – Authenticated Transfer Protocol, or AT Proto for short. I learned that while TBD’s Web5 and Bluesky’s AT Proto share the vision of a decentralized and user-centric web, their approaches and underlying technologies offer a fascinating contrast. I'll examine these parallel approaches in hopes that TBD, Bluesky, and the broader community can gain valuable insights into building infrastructure for the decentralized web.

Building the Next Layer of the Web

Similarities

The web as we know it today consists of physical, network, transport, application, and data layers. Instead of replacing the existing architecture altogether, AT Proto and Web5 aim to add a new layer enabling data to exist beyond individual applications. Both provide tools for developers to build apps within their respective ecosystems.

Bluesky actually serves as a reference implementation to inspire developers and showcase AT Proto's potential.

Differences

AT Proto focuses on decentralized social media, while Web5 enables developers to build any type of application, from financial tools to social media to health management. For example, I developed a fertility tracking app during a hackathon to demonstrate personal health data ownership. Additionally, at TBD, we use components of the Web5 SDK to build the tbDEX SDK, an open financial protocol that can move value anywhere around the world more efficiently and cost-effectively than traditional financial systems.

Data Portability

Similarities

A common frustration with traditional web applications is that users often lose access to their data when a platform shuts down. Even if a user can export their data—say as a CSV file—it becomes static, no longer live or interactive. This data is essentially lost for most users, especially non-technical ones, as it's difficult to rebuild the ecosystem that once surrounded it. For example, moving from one social media app to another means users lose their followers, viral posts, and reputation and have to start from scratch.

Web5 and AT Proto enable users to take their data from one application to another. For example, if a user leaves Bluesky, which operates on AT Proto, they can migrate their data to another AT Proto-compatible app without losing their social connections or posts. Similarly, if an app built with Web5 were to shut down, a user could bring their data to another Web5 app.

Differences

Data portability on these platforms varies due to different data management approaches. AT Proto uses a federated model where each app operates a Personal Data Server (PDS). The PDS, typically managed by the app provider, stores all user data in a repository tied to the user’s identity. Users can move their repository—containing posts, social graphs, and more—between apps within the AT Proto ecosystem by connecting it to another PDS.

In contrast, Web5 depends on Decentralized Web Nodes (DWNs), which are personal data stores fully controlled by the user. To switch apps, users point the new application to their DWN and specify the types of data users of the app can access.

Use of W3C Standards for Authentication

Similarities

Both AT Proto and Web5 leverage the W3C standard called Decentralized Identifiers (DIDs), which are globally unique alphanumeric identifiers that can move with you across different applications. This enables users to maintain their identities consistently across platforms.

While DIDs are often associated with blockchain technology, both Web5 and AT Proto implement a blockchain-less approach. For instance, Bluesky uses a custom DID method called did:plc (DID Placeholder), while Web5 employs did:dht (DID Distributed Hash Table), which anchors DIDs on BitTorrent instead of a blockchain. Learn more about TBD’s DID method here.
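For readers unfamiliar with the shape of a DID, the W3C syntax is did:&lt;method&gt;:&lt;method-specific-id&gt;. A tiny parser (illustrative only, not part of either SDK) makes the structure visible:

```python
def parse_did(did):
    """Split a DID into its method and method-specific identifier,
    following the W3C syntax did:<method>:<method-specific-id>."""
    parts = did.split(":", 2)
    if len(parts) != 3 or parts[0] != "did" or not parts[1] or not parts[2]:
        raise ValueError(f"not a valid DID: {did!r}")
    return parts[1], parts[2]
```

Applied to the methods above, a did:plc identifier parses to method "plc" and a did:dht identifier to method "dht", with the remainder as the method-specific ID.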

Differences

Many developers have told me that the way AT Proto handles authentication is what attracted them to Bluesky, but many of them don’t even realize that they’re using DIDs under the hood. On Bluesky, users can use one of their existing domain names as their username. Bluesky verifies ownership by performing a DNS lookup to make sure the domain belongs to the user. Once verified, the domain is linked to a DID, and the user is marked as verified on their account.
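The DNS-based check can be sketched as follows. In Bluesky's scheme the TXT record lives at _atproto.&lt;handle&gt; with a value of the form did=..., though the resolver here is a hypothetical stand-in rather than a real DNS client:

```python
def verify_handle(handle, expected_did, resolve_txt):
    """Sketch of domain-handle verification: fetch the TXT records for
    _atproto.<handle> and look for one advertising the expected DID.
    resolve_txt is a stand-in for a real DNS query function."""
    for value in resolve_txt(f"_atproto.{handle}"):
        if value == f"did={expected_did}":
            return True
    return False
```

In practice the resolver would be an actual DNS client; here any callable returning a list of TXT record strings will do.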

Web5 also uses DIDs for authentication but in a different way. DIDs eliminate the need for usernames and passwords. Instead, you can log in directly with your DID. This is possible because, in the Web5 ecosystem, every DID has cryptographic keys that securely prove ownership.

Permission Management

Similarities

Both AT Proto and Web5 offer permission management systems, but there are key differences in who can manage these permissions.

Differences

AT Proto takes an application-centric approach to permission management. Permissions are defined by applications using schemas called lexicons, which dictate the rules that the PDS follows. As a result, the extent of control users have over their data depends on the permissions set by the application.

Permission management is where Web5 shines. Users define access controls through JSON schemas called Protocols, specifying who can access specific data stored in their DWN. This is why building a fertility tracking app with Web5 was ideal for me: I could explicitly deny social media apps, marketing platforms, and retailers access to my personal health data, while allowing only my healthcare provider and partner to access it.
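As a loose illustration of this idea, a simplified stand-in (not the actual Web5 protocol schema) for an access rule set and its check might look like:

```python
# Illustrative only: a simplified stand-in for a Web5 protocol definition,
# not the real JSON schema consumed by a DWN.
FERTILITY_PROTOCOL = {
    "records": {
        "cycleEntry": {
            "read": {"owner", "healthcareProvider", "partner"},
            "write": {"owner"},
        }
    }
}

def can_access(protocol, record_type, role, action):
    """Return True when the protocol grants the role the given action."""
    rules = protocol["records"].get(record_type, {})
    return role in rules.get(action, set())
```

With rules like these, a healthcare provider can read cycle entries while a marketer or retailer gets no access at all.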

Special URLs for Data Access

Similarities

Most web users are familiar with URLs, which serve as web addresses to retrieve data online. Similarly, AT Proto and Web5 each use specialized URLs to access data within their ecosystems.

Differences

In AT Proto, special URLs start with the prefix at:// and point to data in a user's PDS.

Example: at://alice.com/app.bsky.feed.post/1234 might reference a specific post in a user's social media feed.

In Web5, Decentralized Resource Locators (DRLs) start with the prefix https://dweb and link to data stored in a DWN.

Example: https://dweb/${did}/read/records/${recordId} allows a user to fetch a specific record from a DWN.
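A minimal parser (illustrative only) shows how the at:// example above decomposes into authority, collection, and record key:

```python
from urllib.parse import urlparse

def parse_at_uri(uri):
    """Split an at:// URI into (authority, collection, record key),
    e.g. at://alice.com/app.bsky.feed.post/1234."""
    parsed = urlparse(uri)
    if parsed.scheme != "at":
        raise ValueError(f"not an at:// URI: {uri!r}")
    collection, _, rkey = parsed.path.lstrip("/").partition("/")
    return parsed.netloc, collection, rkey
```

The authority is the user's handle (which resolves to a DID), the collection names the record type, and the key identifies the individual record.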

Learn More

While I've described some core differences between Web5 and AT Proto, there are more interesting features to explore, including how Bluesky implements algorithmic choice, how Web5 uses W3C's Verifiable Credentials to prove digital identity, and how both platforms refer to individual data pieces as "records." These topics deserve their own deep dives. For now, I encourage you to continue exploring via:

🎥 Watch: My interview with Dan Abramov explaining Bluesky’s implementation

📚 Learn: Check out my SSI expert interview series called tbdTV

🤝 Join: Build with us and join our discussions on Discord.


Spruce Systems

Meet the SpruceID Team: Bryce Einck

If you're a SpruceID client, you may know Bryce! Get to know one of our incredible Technical Success Managers.
Name: Bryce Einck
Team: Product Delivery
Based in: San Diego, CA

About Bryce

I began my journey in customer service as a technician at the Apple Genius Bar, where I honed my troubleshooting and customer service skills. From there, I moved into technical operations and integration support for a healthcare all-in-one practice growth solution, where I expanded my expertise by learning PHP and working with IDEs for integration troubleshooting. I then transitioned to a Customer Success Manager and Product Deployment role at a tech startup focused on providing AI customer support solutions for e-commerce brands. In these positions, I gained experience with product deployment, JavaScript, and consulting on using AI in customer service.

After a brief gap in work, I was looking for something new. I was excited to become a Technical Success Manager at SpruceID because the technology and privacy surrounding digital identity seemed challenging and important for our future.

Can you tell us about your role at SpruceID?

At SpruceID, I handle the day-to-day between Spruce and the California DMV, manage the priorities and expectations of SpruceID's deliverables, provide technical troubleshooting for any arising issues, and facilitate.

What do you find most rewarding about your job?

I enjoy being part of a process that improves and contributes features to the California DMV Wallet mobile application that benefit the digital identity community. It is fun to be on the edge of new tech, especially tech that has yet to be fully standardized.

What has been the most memorable moment for you at SpruceID so far?

The opportunity to travel to Brazil, meet the team, explore new food/culture, and mix local drinks. I also love to surf, and had the opportunity to surf in Brazil as well!

How do you define success in your role, and how do you measure it?

Success in my role is achieved by positively managing expectations and delivering on what is asked for and promised. Success also means supporting my team in any way I can. Measuring success can be hard to define at a startup due to the constantly changing landscape, so I measure it by consistently delivering a high-quality product.

What is your favorite part about working at SpruceID?

I find the team incredibly smart, fun, and supportive!

Fun Facts

What do you enjoy doing in your free time? I enjoy being outdoors, but to stay active, surfing and bouldering are my go-tos year-round. All my other free time is spent with my family and friends, playing overcompetitive card/board games, and cooking.

If you could be any tree, what tree would you be and why? I would choose to be a Redwood tree. I grew up surrounded by them and have always loved how large they get, their ability to grow together in angel rings as a support system, and their fire-resistant qualities.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

NIS2 - EU Network and Information Security Directive

by Martin Kuppinger

NIS2, the revised EU Network and Information Security Directive (EU 2022/2555) entered into force on January 16th, 2023. EU member states are obliged to transfer the directive into national law by October 17th, 2024. NIS2 mandates organizations to strengthen their cybersecurity posture and have proper incident handling and reporting in place. It also extends the scope very significantly, affecting an estimated 160,000 organizations within the EU. Thus, organizations must understand where to focus their cybersecurity investments to be prepared for NIS2.

Enhancing Security Frameworks through Zero Trust and Identity Threat Detection and Response (ITDR)

by Paul Fisher

In a world that is becoming increasingly digital, it is crucial to have strong security frameworks in place. The shift towards cloud computing, remote work, and digital transformation has expanded the attack surface for organizations, making traditional security models insufficient. This KuppingerCole White Paper explores the integration of Zero Trust principles and Identity Threat Detection and Response (ITDR) to enhance security frameworks, providing a proactive and comprehensive approach to safeguarding digital assets.

Verida

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part…

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 2)

This is the second of three posts over the next three weeks to release the “Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI” and was originally published by Chris Were, CEO and co-founder at Verida. Part 1 is here.

Confidential Compute

A growing number of confidential compute offerings from the large cloud providers give access to Trusted Execution Environments (TEEs). These include AWS Nitro, Google Confidential Compute and Azure Confidential Compute. Tokenized confidential compute offerings such as Marlin Oyster and Super Protocol have also emerged recently.

These compute offerings typically allow a container (such as a Docker instance) to be deployed within a secure enclave on TEE hardware. The enclave has a range of verification and security measures that can prove that the code and data running in the enclave are what you expect and that the enclave has been deployed in a tamper-resistant manner.

There are some important limitations to these secure enclaves, namely:

There is no direct access available to the enclave from the infrastructure operator. Communication occurs via a dedicated virtual socket between the secure enclave and the host machine (*).
There is no disk storage available; everything must be stored in RAM.
Direct GPU access is typically not available within the secure enclave (necessary for high-performance LLM training and inference); however, this capability is expected to be available in early 2025.

(*) In some instances the infrastructure operator controls both the hardware attestation key and the cloud infrastructure which introduces security risks that need to be carefully worked through, but is outside the scope of this document.
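The code-verification guarantee above boils down to comparing the enclave's reported measurement against the value the verifier expects for a known-good image. A minimal sketch of that check follows; the image names and hash values are purely illustrative, and real attestation-document parsing and signature verification are omitted:

```python
import hashlib
import hmac

# Expected measurement of the enclave image we intend to trust. In a real
# deployment (e.g. an AWS Nitro PCR value) this comes from a reproducible
# build of the enclave image; the value here is purely illustrative.
EXPECTED_MEASUREMENT = hashlib.sha384(b"enclave-image-v1").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Trust the enclave only if its reported measurement matches the
    expected one; constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

# A genuine enclave reports the expected measurement; a tampered image does not.
assert verify_attestation(hashlib.sha384(b"enclave-image-v1").hexdigest())
assert not verify_attestation(hashlib.sha384(b"tampered-image").hexdigest())
```

Real attestation documents are signed by the hardware vendor and carry additional fields (nonce, public key, user data); verifying that signature chain is what makes the measurement trustworthy.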

The Verida network is effectively a database offering high performance data synchronization and decryption. While secure enclaves do not have local disk access (by design), it is possible to give a secure enclave a private key, enabling the enclave to quickly download user data, load it into memory and perform operations.

While enclaves do not have direct access to the Internet, it is possible to facilitate secure socket connections between the host machine and enclave to “proxy” web requests to the outside world. This increases the surface area of possible attacks on the security of the enclave, but is also a necessary requirement for confidential compute that interacts with other web services.

It is critical that confidential AI inference for user prompts has a fast response time to ensure a high quality experience for end users. Direct GPU access via confidential compute is most likely necessary to meet these requirements. Access to GPUs with TEEs is currently limited, however products such as the NVIDIA H100 offer these capabilities and these capabilities will be made available for use within the Verida network in due course.

Self-Sovereign Compute

Verida offers a self-sovereign compute infrastructure stack that exists on top of confidential compute infrastructure.

Figure 1: Self-Sovereign Compute Architecture

The self-sovereign compute infrastructure provides the following guarantees:

User data is not accessible by infrastructure node operators.
Runtime code can be verified to ensure it is running the expected code.
Users are in complete control over their private data and can grant / revoke access to third parties at any time.
Third-party developers can build and deploy code that will operate on user data in a confidential manner.
Users are in complete control over the compute services that can operate on their data and can grant / revoke access to third parties at any time.

There are two distinct types of compute that have different infrastructure requirements: Stateless Confidential Compute and Stateful Confidential Compute.

Stateless (Generic) Confidential Compute

This type of computation is stateless: it retains no user data between API requests. However, it can request user data from other APIs and process that user data in a confidential manner.

Here are some examples of Generic Stateless Compute that would operate on the network.

Figure 2: Verida Personal Data Bridge

Private Data Bridge facilitates users connecting to third-party platform APIs (ie: Meta, Google, Amazon, etc.). These nodes must operate in a confidential manner as they store API secrets, handle end user access / refresh tokens to the third-party platforms, pull sensitive user data from those platforms, and then use private user keys to store that data in users’ private databases on the Verida network.

LLM APIs accept user prompts that contain sensitive user data, so they must operate in a confidential compute environment.

AI APIs such as AI prompt services and AI agent services provide the “glue” to interact between user data and LLMs. An AI service can use the User Data APIs (see below) to directly access user data. This enables it to facilitate retrieval-augmented generation (RAG) via the LLM APIs, leveraging user data. These APIs may also save data back to users’ databases as a result of a request (i.e., saving data into a vector database for future RAG queries).
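That glue role (query user data, then assemble a prompt for the LLM APIs) can be sketched as follows; the function name and data snippets are invented for illustration and are not Verida's actual API:

```python
def build_rag_prompt(question: str, retrieved: list) -> str:
    """Assemble a retrieval-augmented prompt: snippets returned by a
    User Data API query are inlined as context ahead of the question."""
    context = "\n".join(f"- {snippet}" for snippet in retrieved)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

# Hypothetical snippets a user-data query might return:
snippets = ["Flight BA117 departs 09:40 on 3 Oct", "Hotel: Park Plaza, 2 nights"]
prompt = build_rag_prompt("When is my flight?", snippets)
assert "BA117" in prompt and prompt.endswith("Question: When is my flight?")
```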

See “Self-Sovereign AI Interaction Model” from Part 1 for a breakdown of how these generic compute services can interact together to provide AI services on user data.

Stateful (User) Confidential Compute

This type of computation is stateful, where user data remains available (in memory) for an extended period of time. This enhances performance and, ultimately, the user experience for end users.

A User Data API will enable authorized third party applications (such as private AI agents) to easily and quickly access decrypted private user data. It is assumed there is a single User Data API, however in reality it is likely there will be multiple API services that operate on different infrastructure.

Here are some examples of the types of data that would be available for access:

Chat history across multiple platforms (Telegram, Signal, Slack, Whatsapp, etc.)
Web browser history
Corporate knowledge base (ie: Notion, Google Drive, etc)
Emails
Financial transactions
Product purchases
Health data

Each of these data types has different volumes and sizes, which will also differ between users. It’s expected the total storage required for an individual user would be somewhere between 100MB and 2GB, whereas enterprise knowledge bases will be much larger.

In the first phase, the focus will be on structured data, not images or videos. This aligns with Verida’s existing storage node infrastructure and aids the development of a first iteration of data schemas for AI data interoperability.

The User Data API exposes endpoints to support the following data services:

Authentication for decentralized identities to connect their account to a User Data API Node
Authentication to obtain access and refresh tokens for third-party applications
Database queries that execute over a user’s data
Keyword (Lucene) style search over a user’s data
Vector database search over a user’s data

Connecting Stateful Compute to Decentralized Identities

Third party applications obtain an access token that allows scoped access to user data, based on the consent granted by the user.
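One common, stateless way to implement such scoped tokens is to sign the granted scopes with a key held by the node; the key, DID and scope names below are illustrative assumptions, not Verida's actual token format:

```python
import base64
import hashlib
import hmac
import json

NODE_KEY = b"node-secret"  # held by the compute node (illustrative)

def issue_token(did: str, scopes: list) -> str:
    """Encode the consented scopes and sign them, so the node can later
    verify the token without storing per-token state."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"did": did, "scopes": scopes}).encode()
    ).decode()
    sig = hmac.new(NODE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def check_scope(token: str, required: str) -> bool:
    """Reject tampered tokens, then check the requested scope was granted."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(NODE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return required in claims["scopes"]

token = issue_token("did:vda:0xabc", ["db.query", "search.keyword"])
assert check_scope(token, "db.query")       # granted by the user
assert not check_scope(token, "db.write")   # never consented to
```

In production this job is usually done with a standard token format (e.g. JWT) rather than a hand-rolled scheme; the sketch only shows the consent-to-scope mapping.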

A decentralized identity on the Verida network can authorize three or more self-sovereign compute nodes on the network to manage access to their data for third-party applications. This is via the serviceEndpoint capability on the identity’s DID Document. This operates in the same way that the current Verida database storage network allocates storage nodes to be responsible for user data.
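Under that model, the identity's DID Document might advertise its authorized compute nodes roughly as below. The field names follow the W3C DID Core vocabulary, but the service `type` value and endpoint URLs are assumptions for illustration:

```python
# Hypothetical DID document naming three self-sovereign compute nodes
# via serviceEndpoint entries.
did_document = {
    "id": "did:vda:0xabc",
    "service": [
        {
            "id": f"did:vda:0xabc#user-data-api-{i}",
            "type": "VeridaUserDataAPI",  # assumed type label
            "serviceEndpoint": f"https://node{i}.example.com/api",
        }
        for i in range(1, 4)
    ],
}

# A client resolves the DID and picks any of the authorized endpoints:
endpoints = [s["serviceEndpoint"] for s in did_document["service"]]
assert len(endpoints) == 3
```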

Secure enclaves have no disk access; however, user data is available (encrypted) on the Verida network and can be synchronized on demand given the appropriate user private key. It’s necessary for user data to be “hot loaded” when required, which involves synchronizing the encrypted user data from the Verida network, decrypting it, storing it in memory and then adding other metadata (i.e., search indexes). This occurs when an initial API request is made, ensuring user data is ready for fast access for third-party applications.

After a set period of time of inactivity (i.e., 1 hour) the user data will be unloaded from memory to save resources on the underlying compute node. In this way, a single User Data API node can service requests for multiple decentralized identities at once.
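That load-on-first-request, unload-after-inactivity lifecycle can be sketched as a small in-memory cache keyed by DID. The one-hour timeout and the `fetch` callback (standing in for the sync-decrypt-index step) are illustrative:

```python
import time

UNLOAD_AFTER = 3600  # seconds of inactivity before user data is dropped

class HotCache:
    """Holds decrypted, indexed user data in memory per DID, evicting
    idle entries so one node can serve many identities at once."""

    def __init__(self, now=time.monotonic):
        self._now = now     # injectable clock, handy for testing
        self._entries = {}  # did -> (data, last_access_time)

    def load(self, did, fetch):
        data, _ = self._entries.get(did, (None, 0.0))
        if data is None:
            data = fetch(did)  # hot load: sync, decrypt, build indexes
        self._entries[did] = (data, self._now())
        return data

    def evict_idle(self):
        cutoff = self._now() - UNLOAD_AFTER
        for did, (_, last_access) in list(self._entries.items()):
            if last_access < cutoff:
                del self._entries[did]

# With a fake clock: data is resident after load, gone after an idle hour.
clock = [0.0]
cache = HotCache(now=lambda: clock[0])
cache.load("did:vda:0xabc", lambda did: {"emails": 3})
clock[0] = 4000.0
cache.evict_idle()
assert "did:vda:0xabc" not in cache._entries
```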

It will be necessary to ensure “hot loading” is fast enough to minimize the first interaction time for end users. It’s also essential these compute nodes have sufficient memory to load data for multiple users at once. Verida has developed an internal proof-of-concept to verify that “hot loading” user data will be a viable solution.

For enhanced privacy and security, the data and execution for each decentralized identity will operate in an isolated VM within the secure enclave of the confidential compute node.

Stay tuned, the third and final release of the Litepaper will be made available next week.

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part… was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


MyDEX

What we do: Identity as a Service

This blog is fourth in a series explaining how Mydex’s personal data infrastructure works. It explains how our platforms help deliver our mission of empowering individuals with their own data: how it enables them to use this data to manage their lives better and assert their human rights in a practical way on a daily basis.

Blogs in this series are:

What IS a Personal Data Store?
Personal Data Stores and Data Sharing
Connecting ‘data about me’ to the world around me
Identity as a Service

Thirty years ago, when the Internet was still a new thing, a joke started doing the rounds. “On the internet,” it said, “nobody knows you’re a dog”.

It was a flippant comment but it was also amazingly prescient. This issue of knowing who the other person is at the end of the line has continued to dog the provision of digital services ever since.

When you see a friend or family member in the street you can recognise them instantly. In that instant, your brain processes dozens of cues relating to their facial features and expressions, their voice, size and weight, gait, mannerisms and gestures, so that you ‘just know’ it’s them. It does these things so fast and accurately that it seems incredibly simple. But it is not, as robotics and AI practitioners have discovered to their cost over many decades.

None of the cues that our brains process so brilliantly are available when you deal with another person remotely, online. Hence that early Internet joke.

For a society and economy that does more and more things online, this is incredibly important. It’s not just about fraud, though that is a big and ever-present danger. It’s also about simple practicality, efficiency and quality. If people and organisations want to do business with each other online, they need to be able to recognise one another. The whole issue of online or ‘digital’ identity is thus a sine qua non of all online service provision: without being able to recognise people when they sign up to and use an online service, it’s impossible for that service to operate.

Mydex personal data stores are helping to solve this problem, in two ways.

Two meanings of ‘identity’

Before we go any further, there’s one big source of confusion that we need to address. In the context of online interactions and transactions, the term ‘digital identity’ is commonly used to mean two very different things. In many conversations and debates, people move seamlessly from one of these meanings to the other and back again without even realising they’re doing it. The result is endless confusion.

One of these meanings is knowing (or at least being pretty confident) that the person (or organisation) that you are dealing with is who they say they are. This is the whole area of identity assurance (sometimes called identity verification). Like all those cues of sight, sound and behaviour that we use to recognise our friends and family, this can involve gathering quite a lot of information about the person and ‘binding’ it to them. For example, if you know their name, address and age, and that they have this passport number and that driving licence number, and so on: the more bits of information you have about them, the more confident you can be that they are who they say they are.

The second meaning of identity is more mundane and administrative, but perhaps even more important. It’s about simply recognising them when they turn up at your front door — when they log in to a website or app for example. This, we call identity authentication.

The two may be connected. For example, a bank might go through a process of identity assurance when first providing a customer with a bank account. At this stage the bank needs to have lots of details about who the person is. But once that process is complete, all the bank needs to do is recognise that customer when they return to use the service, for example by use of a username and password and/or other authentication steps. This is the identity authentication bit.

On the other hand, identity assurance and identity authentication might not be connected at all. With some types of service, say when you are subscribing to a newsletter, the service provider doesn’t really need to know who the person is at all. All they need to know is if it’s the same person returning to use that service. In this case, the person could just as well use an invented name such as Mickey Mouse, along with a password like M-Mouse and it wouldn’t really matter. The service could still operate.

Once the ‘relying party’ (the party using the authentication) knows that the person is using the same identifiers, they can then map their activities, records, specific preferences etc to that individual, for their use of the service, without necessarily knowing who they actually are.

Mydex’s role in identity

Mydex’s personal data store infrastructure makes a fundamental contribution to both types of identity challenge. By enabling individuals to amass large quantities of verified attributes (sometimes referred to as verified credentials) about themselves, and to share these verified attributes easily, quickly and safely, our personal data stores go a long way to solving the problem of identity assurance and verification, without the need for privacy invading processes such as ‘identity cards’. You can see more detail about what we do on this front here.

However, the focus of this blog is on the second, practical, administrative matter of identity authentication — what all of us have to do many times a day when logging in to different types of online service.

Here, the current state of play is … a complete mess.

It grew into this mess quite naturally. First off, in the very earliest days of online services, service providers had to recognise customers when they logged in, used and returned to the service. So they invented the username and password.

It’s a pretty neat solution, except for one thing. Every different organisation created its own bespoke process for recognising people when they use a service, requiring individuals to invent (and remember) hundreds or perhaps thousands of different usernames and passwords. (Or, for the sake of convenience, they could use just one username and password, in which case if they ever got hacked the hacker would have access to every single service they had ever used).

This organisation-centric ‘bespoke solution’ to identity authentication multiplied costs and complexity for both people and service providers many times over. Most service providers had no desire to be in ‘the username and password business’ but took it on simply because they had to. It was a cost of doing business.

Then, monopolist digital platforms like Google and Facebook spotted a market opportunity. “If you log in to our service we can use the credentials we have created for you to log you on to other services!” In this way, individuals didn’t have to remember hundreds of different usernames and passwords, and service providers could get out of having to manage their username and password business. How convenient! Social sign-in was born.

On the surface, it looked like an ideal win-win. But there was one drawback to this ‘solution’, and it is an ABSOLUTELY HUGE drawback. It delivers privacy ‘bleed’ on a gargantuan scale. By letting the digital monopolists provide ‘social sign-in’ services, individuals effectively give them permission to track their movements across the entire internet, gathering data about everything they do online — all to further concentrate power and profits in the hands of these monopolists.

Social sign-in is one of today’s volcano issues and scandals, just waiting to blow up as and when people begin to realise just how deeply invasive and pervasive and exploitative it is — all to escape the inconveniences and costs created by the first faulty attempts to solve the identity authentication problem in an organisation-centric way.

Where Mydex fits in

With Mydex’s Identity (authentication) as a Service (IDaaS) the core idea of social sign-in (e.g. only having to log in once to access many different services) is still achieved but without any privacy bleed. In fact, the goal of a single log-in is achieved while enhancing individuals’ rights and control.

It works like this. When an individual gets their personal data store they set up a username and password by which Mydex can recognise them when they log in (i.e. no different to any other service provider). They have this for life. Then, once the individual is logged in to Mydex, they can use Mydex’s connections with other connected services to automatically log in to those services too.

This means that individuals can flow from one service to another without ever having to log in to these other services — because all the handshakes are working for them, automatically, behind the scenes, not getting in the way of what they are trying to do.

But this time, there is no data surveillance. Mydex is not tracking the individual anywhere. It is not collecting any information about where they go or what they do online. It is simply using the fact that it has established a secure connection with another service to open a gate and let the individual through, if and when they want to pass through that particular gate (i.e. to that particular service).

Service providers can still minimise their involvement in the username and password service but with an added benefit that, in using Mydex’s IDaaS they are not handing over oodles of data about their customers to Silicon Valley digital monopolies. Any data generated by the transaction or interaction goes into just one of two places: into relying parties’ own systems or into the individual’s personal data store. Never to a third party, including Mydex. That’s because Mydex cannot see any of the data that goes into the individual’s personal data store as explained here.

The result is that both sides gain convenience, efficiency and added safety. Why added safety?

Originally, identity authentication systems were established by organisations to protect their own digital front doors. They were designed to protect the safety of the organisation, not the individual. The Mydex approach is designed to help individuals protect their digital front doors. It’s about empowering citizens with agency; with the information services they need to make their way efficiently and effectively within a complex world of service provision.

Because data about interactions is stored in the individual’s PDS, every time the Mydex ID is used it creates a log which the individual can inspect. For example, it could alert them to the fact that somebody has tried to use their ID to log-in to a service. In this way, the individual gets an audit trail of every use of their Mydex ID. This information is held in their PDS for their use alone, away from prying eyes — information that is NOT handed over to the likes of Google or Facebook.

Just to emphasise: This is data that Mydex itself cannot access because each individual has their own private encryption key to their own PDS. This means that while Mydex holds the data (in encrypted form) in its systems it cannot actually ‘see’ its content.

Extra added value

The above provides a simple summary of Mydex’s Identity as a Service model. But there is more to this simple service than meets the eye.

First, individuals can increase the security of their interactions if they want to, by adding in extra layers of security. They can, for example, require a ‘multifactor authentication’ process whereby an additional piece of information is used to authenticate their identity. This could be a one-time code sent to their phone or email, or generated by an authenticator app.
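The authenticator-app variant of such one-time codes typically follows RFC 6238 (TOTP), which both sides can compute independently from a shared secret and the current time. A minimal sketch, with an illustrative secret:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time code (SHA-1, 30-second step),
    the scheme most authenticator apps implement."""
    counter = int(at // step)  # index of the current time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Both sides derive the same code within the same 30-second window:
secret = b"shared-secret"
assert totp(secret, 999_990) == totp(secret, 1_000_019)
assert len(totp(secret, 0)) == 6
```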

Second, the individual can also add other identifiers like email addresses and mobile numbers to their MydexID to protect them from use by anyone else. Registering multiple email addresses and mobile numbers also allows the individual to select any of these alongside their core MydexID itself to log in, because they are all linked together. This delivers greater security and protection and also overcomes those issues where people lose access to an email or mobile number. Now they always have back-up routes for accessing their MydexID and linked services.

Third, individuals can set preferences about where notifications may be sent to them, for example a specific email address, a mobile number, or both. Each person has different ways they prefer to get notifications. This gives them the ability to make that choice independently of any relying party (service provider).

This is NOT about giving service providers the power to create hoops for individuals to jump through. It’s about enabling individuals to add extra layers of security if and when they feel they need to. It’s about putting the individual in control.

Fourth, there may be occasions when an individual wishes to log in to a service provider (such as a researcher or survey outfit) where they share information about themselves but want to do so anonymously. They can use their Mydex ID to do this. This is because, along with the Mydex ID, comes what we call a ‘universally unique identifier’ (UUID), which hides their Mydex ID and contact details from the service provider.

This UUID acts like a wrapper that hides what is inside. It provides the same guarantees as those provided by the username and password but without actually providing these actual identifiers. It can be used by the service provider to recognise that it is the same person returning to the service without actually knowing who that person is.
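One way such a wrapper can be built is to derive a stable per-service pseudonym from the real identifier with a keyed hash, so the same person maps to the same UUID on return visits while pseudonyms stay unlinkable across services. The key and service names below are illustrative assumptions, not Mydex's actual scheme:

```python
import hashlib
import hmac
import uuid

def service_pseudonym(real_id: str, service: str, key: bytes) -> str:
    """Derive a deterministic UUID for (identity, service) without
    revealing the real identifier to the service provider."""
    digest = hmac.new(key, f"{real_id}|{service}".encode(), hashlib.sha256).digest()
    return str(uuid.UUID(bytes=digest[:16]))

key = b"platform-secret"  # held by the identity platform (illustrative)
p = service_pseudonym("alice-mydex-id", "survey.example", key)
assert p == service_pseudonym("alice-mydex-id", "survey.example", key)  # same person recognised
assert p != service_pseudonym("alice-mydex-id", "shop.example", key)    # unlinkable across services
```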

This enables researchers to work with someone over a period of time to see changes in their behaviours/life without actually knowing who they are. And it enables individuals to participate in such research, safely and securely.

Fifth, the system allows identity authentication to work ‘in reverse’ where, if they have already signed in to a service that’s connected to the Mydex IDaaS, individuals can use the fact that they have logged in to this service to also log in to their personal data store (PDS). There, they can add and update data and manage their preferences, including things like adding more Multi Factor Authentication Options and approving connections between their PDS and subscribers adding data.

Further Benefits

Service providers further benefit in a number of ways. As well as not having to operate their own username and password business, they can use the Mydex ID to connect to the individual’s personal data store (if the individual wants them to connect). This opens the door to safe, secure, permissioned, two way data sharing.

For example, if the individual already holds a profile about themselves in their PDS — a profile containing data usually held in a service provider’s ‘My Account’ functionality — then the individual can simply click a button to provide that information to the service provider. No more having to fill in online forms!

This makes the process of onboarding onto a new service much easier, quicker and safer, especially for smaller organisations.

Service providers can also trigger multi-factor authentication processes if they require it — as do most banks for example. In particularly sensitive situations, it is also possible to create unique identities that only work for that particular transaction and cannot be reused once that transaction has been completed.

Conclusion

Thirty years ago, it was a joke that people didn’t know who they were dealing with when interacting online. Today, it’s no longer a joke. It’s a massive cost and hassle for millions of people and organisations alike. These costs and inconveniences are being gamed and abused to an absurd extent by both fraudsters and monopolists.

But there are ways to solve this problem safely and efficiently. And Mydex has found a way to do just that.

What we do: Identity as a Service was originally published in Mydex on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 26. August 2024

Ontology

The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections

​​The recent arrest of Telegram’s CEO Pavel Durov at a Paris airport is more than just a headline; it’s a stark reminder of the escalating global crackdown on privacy-centric platforms. Durov, who has championed digital freedom, is now facing serious allegations that his platform has been used for illegal activities ranging from money laundering to child exploitation. But beneath these charges lies a broader, more urgent issue — the clash between centralized control and the fundamental need for decentralization, censorship resistance, and privacy in our digital lives.

Telegram, like many centralized platforms, operates in a gray area where user privacy is at odds with government demands for access and control. This arrest underscores the vulnerabilities of centralized systems — where a single point of failure, like Durov’s arrest, can jeopardize the entire platform and its user base. The incident raises critical questions: How much control should governments have over communication platforms? And, more importantly, how can we safeguard individual privacy in an increasingly surveilled world?

Decentralized systems offer a compelling solution. Unlike traditional platforms, they are not controlled by any single entity, making them inherently resistant to censorship and external pressure. A decentralized messaging app, for example, would not have a CEO who could be arrested, nor would it have servers that could be easily seized. This structure ensures that users maintain control over their data and communications, rather than relinquishing it to a central authority.

Moreover, decentralized identity (DID) plays a crucial role in this landscape. DID allows individuals to own and control their identities across different platforms without depending on a centralized authority. This is essential in preventing the misuse of personal data and ensuring that privacy remains intact, even if one platform is compromised. In an era where governments and corporations alike are vying for more control over digital spaces, the protection offered by DID is invaluable.

The implications of Durov’s arrest go beyond Telegram. It signals the growing pressure on privacy-focused platforms and the need for a shift toward decentralization. As governments increase their grip on digital communications, the only sustainable path forward lies in systems that are beyond their reach — systems that prioritize individual autonomy, censorship resistance, and privacy. The rise of decentralized identity technologies is not just timely; it’s necessary for preserving the freedom that centralized platforms can no longer guarantee.

In conclusion, Durov’s arrest is a wake-up call. It underscores the fragility of centralized systems in the face of authoritarian pressure and the critical need for decentralized alternatives that respect and protect our privacy. As the battle over digital freedom intensifies, decentralization and decentralized identity will be key to ensuring that the internet remains a space for free and open communication, untainted by the heavy hand of censorship and control.

The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Spherical Cow Consulting

Digital Identity in the Age of AI: Challenges and Opportunities

AI is revolutionizing digital identity, enhancing security and efficiency across various industries. Adaptive authentication, powered by AI, assesses real-time access risk, reducing cumbersome password prompts for users and bolstering security for companies. However, this reliance on AI for authentication raises privacy concerns due to extensive data access. Moreover, the use of AI for malicious purposes, such as creating deepfakes, raises further concerns.

Yes, AI is everywhere. And yes, that means it is having an impact (one that will only grow) on the digital identity space. And like most other transformative technologies, the impact will be incredibly positive … and also something to be very concerned about. Now that the paper led by OpenAI, which asks policymakers, technologists, and standards bodies to think about how to develop mechanisms to identify whether an entity online is a person or an AI, has been published (I had a small part in that paper), the whole AI-and-identity question is back at the forefront of my brain.

How AI is Changing Digital Identity Security

As our online identities grow more complex, artificial intelligence (AI) is playing a bigger role in keeping them safe. Organizations use AI to spot all sorts of nefarious activities and protect personal information by analyzing patterns and catching anything out of the ordinary. (Which makes me ask, “what is ordinary and who defines it?” I’d love to have that conversation sometime over beverages.)

AI isn’t just for tech giants—industries like banking and e-commerce are using it to prevent fraud and verify identities. For example, in banking, AI can track transaction habits to flag anything unusual, potentially stopping fraud before it happens. In online shopping, AI helps confirm who you are during transactions, cutting down on the risk of identity theft.
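The banking example above can be reduced to a toy sketch: flag any transaction whose amount sits far outside the account's history. The z-score threshold and the single "amount" feature are illustrative assumptions only; production fraud systems use far richer signals and models.

```python
# A toy illustration of pattern-based fraud flagging, assuming a
# simple statistical model of the account's transaction habits.
from statistics import mean, stdev

def flag_unusual(history: list, amount: float, z: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from habit."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > z * sigma
```

A $5,000 charge on an account that usually spends $20–$30 would be flagged for review, while a $23 charge would pass silently.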

What is Adaptive Authentication?

Adaptive authentication is changing how we verify digital identities. Instead of relying on passwords, this method uses AI to evaluate the risk of an access request in real time. It looks at factors like where the request is coming from, what device is being used, and what time it is.

This approach has big benefits. For users, it means fewer annoying password prompts. For companies, it means stronger security because the system can adjust the level of authentication needed based on the perceived risk. All good stuff, until you look at the amount of data AI must access in order to make these determinations. Privacy advocates have a lot to say about this.
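As a rough sketch of how such a policy can fit together (the signals, weights, and thresholds below are invented for illustration; they are not any vendor's actual model):

```python
# A minimal sketch of adaptive-authentication risk scoring:
# contextual signals raise a score, and the score decides how
# much authentication friction the user sees.

def risk_score(signal: dict) -> int:
    """Score an access request from contextual signals (0 = low risk)."""
    score = 0
    if signal.get("new_device"):    # device never seen for this account
        score += 40
    if signal.get("geo_mismatch"):  # request far from the user's usual locations
        score += 35
    if signal.get("odd_hour"):      # outside the user's normal hours
        score += 15
    return score

def required_auth(signal: dict) -> str:
    """Map the score to an authentication requirement."""
    score = risk_score(signal)
    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up"  # e.g. prompt for a second factor
    return "allow"        # no extra prompt for a familiar context
```

A familiar device at a familiar location sails through; a new device in a new country triggers a second factor or an outright denial. Note how much context the system must collect to make that call, which is exactly the privacy tension described above.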

The Challenges of AI in Digital Identity

So let’s talk about the privacy aspects for a moment. While AI offers new ways to secure digital identities, the ramifications when it comes to privacy are huge. AI systems need a lot of data to work effectively, and this raises questions about how that data is collected and used.

Another concern is the potential for AI to be used in malicious ways, like creating deepfakes—fake media that looks real but isn’t. This technology could be used to create false digital identities, making it harder to tell what’s real online.

The European Union’s AI Act tackles the issues of where and how AI might be used, and is the first comprehensive regulation in the world on the subject. But, being the first, there are still significant concerns about whether it is enough. The rest of the world is watching to see what works, what doesn’t, and what they can take away from the effort for their own regulations.

AI’s Role in Different Industries

AI-driven digital identity tools are being used in many sectors, each with unique challenges and applications:

Finance: AI helps detect fraud faster and more accurately by analyzing years of transaction data to spot suspicious patterns.
Healthcare: Digital identity is crucial for protecting patient privacy and streamlining services. AI helps verify identities and manage access to sensitive medical records, ensuring secure and personalized care.
E-commerce: Online retailers use AI to prevent identity theft by analyzing shopping patterns. AI can flag unusual transactions that may indicate fraud, protecting both the customer and the retailer.

Is there an industry that AI won’t touch? If that industry has any kind of online presence, then I’d say no, probably not.

The Global View: Working Together on AI and Digital Identity

Digital identity challenges aren’t confined to one country—they’re global. Just like when thinking about the Internet, commerce, and human migration, geopolitical boundaries are just another consideration when it comes to digital identity. I’ve already mentioned the EU’s AI Act. If you’re following this space at all, you should also be aware of the OECD’s AI Principles, initially published in 2019 and updated earlier this year (May 2024). If you’re in the US, you really need to check out the Executive Order President Biden’s administration posted in October 2023, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

It’s always fascinating (and a little scary) when technology outpaces the law. Of course, it’s not all that great when the law outpaces technology and starts to make stuff up about what’s possible. If it wasn’t my digital identity and that of my 8 billion fellow humans, I’d heat up some popcorn and watch the demolition derby that is technology standards and regulations.

Wrap Up

So, yup, AI is having a big impact on digital identity. It’s making things safer, improving user experiences, and helping industries operate more efficiently. But with these benefits come challenges, especially around privacy and security.

For tech leaders, you kind of don’t have a choice. Your organization needs to get involved in shaping AI-driven digital identity solutions. By adopting these technologies now AND following the principles that exist to make it safe for your employees and customers, you will improve your organization’s security and efficiency. If you don’t, the hackers of the world will thank you.

And if you’re an individual contributor like me, stay on top of the tech news for the latest in security recommended practices. Look for any open calls for comments on the standards and principles that impact this space.

Of course, if you’d like to outsource paying attention to all this and get someone to write a monthly report on the latest, reach out to me, and we’ll see what’s possible.

The post Digital Identity in the Age of AI: Challenges and Opportunities appeared first on Spherical Cow Consulting.


Ontology

Unleash Your Inner Ontonaut with OntoNex Level


Are you ready to take your journey with Ontology to the next level? Introducing the OntoNex Level Program — our latest initiative designed to reward you for being an active part of the Ontology community. Whether you’re a conversation starter, network builder, or community guardian, there’s a role for you to shine and earn rewards along the way.

What’s OntoNex Level All About?

The OntoNex Level Program is more than just a rewards system; it’s a pathway for you to maximize your potential within the Ontology ecosystem. Each role is tailored to match your strengths and passions, allowing you to contribute meaningfully and earn coins that can be redeemed for exclusive rewards.

The Roles:

Chatster: Energize the community with engaging conversations. Unlock achievements and earn coins with every message.
Inviter: Grow our network by inviting new members. Earn 10 coins for each successful invite.
Guard: Maintain a safe and welcoming environment by reporting spam. Earn 10 coins for every spam report.
Helper: Share your Ontology knowledge by assisting others. Earn 10 coins for each helpful interaction.
Campaigner: Participate in various community campaigns and events. Earn 5 coins for every event you join.

Level Up and Unlock Exclusive Rewards

As you accumulate coins, you can redeem them for special rewards:

100 coins: Buy a Loyal NFT Plus.
2000 coins: Unlock the ‘Monthly NFT Receiver’ role, and receive an NFT every month.
5000 coins: Unlock the ‘Weekly NFT Receiver’ role, and receive an NFT every week.

Track Your Progress

Stay on top of your achievements with these simple commands:

/achievement: See your progress in completing achievements.
/coins: Check your current coin balance.
/buy: Purchase items with your coins.
/item: View the items you already own.

Join Us on Discord!

Ready to dive in? The best way to get started is by joining our Discord community, where you can take on your role, engage with fellow Ontonauts, and start earning rewards today. Click here to join our Discord.

Unleash Your Inner Ontonaut with OntoNex Level was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 25. August 2024

KuppingerCole

WAF, WAAP, What? The Evolution of Web Application Firewalls


What makes a Web Application Firewall (WAF) a Web Application and API Protection (WAAP) solution? How is the landscape of the market changing and does every organization need a WAAP solution? Tune in to this episode of the Analyst Chat with guest Osman Celik and host Matthias Reinwarth to learn more.

Dive deeper into the topic



Friday, 23. August 2024

Elliptic

OFAC targets Russian war effort with 400 sanctions, identifying a crypto address connected to KB Vostok


The US Treasury’s Office of Foreign Assets Control (OFAC) has today issued sanctions against nearly 400 individuals and entities whose products and services enable Russia to sustain its war effort and evade sanctions. 

Amongst those sanctioned today is KB Vostok (A.K.A. Vostok Design Bureau), a drone manufacturer which specialises in the “development of industrial-grade unmanned aerial vehicles”.


Dock

ISO 18013-5 Standard: What It Is And How It Works


With the growing adoption of digital identity initiatives, it has become more complex to ensure security, interoperability, and compliance, requiring adherence to rigid and evolving standards.

This is where ISO 18013-5 comes into play, offering a standardized approach to secure and verify digital identities. It's the backbone of mobile driver’s licenses (mDL) implementations, providing guidelines that enhance trust and facilitate verification processes.

In this post, we'll explore ISO 18013-5, covering its definition, benefits for governments, businesses, and individuals, and development history.

Full article: https://www.dock.io/post/iso-18013-5


KuppingerCole

The Anatomy of Cyber Resilience


by Osman Celik

In today's business landscape, cyber resilience is crucial for an organization's ability to sustain operations and deliver desired outcomes in the face of cyber threats or incidents. Cyber resilience encompasses not only the prevention and protection against cyber threats but also the ability to detect, respond to, and recover from them effectively. While often confused with cybersecurity, cyber resilience serves a distinct purpose within an organization's risk management strategy.

Cybersecurity vs. Cyber Resilience

Cybersecurity primarily focuses on protecting systems, networks, and data from unauthorized access. This is achieved through mechanisms such as firewalls, encryption, detection and response systems, and identity and access management. In contrast, cyber resilience goes a step further by ensuring business operations continue during and after a cyber incident. While cybersecurity aims to prevent incidents, cyber resilience assumes that breaches may occur and emphasizes maintaining business continuity and facilitating swift recovery.

The Inevitable Future with AI

As AI continues to integrate into our daily lives, it is inevitable that it will play a significant role in maintaining business continuity. However, this development presents both opportunities and challenges. On one hand, AI-powered tools enhance cyber resilience by improving detection and response times, as well as predicting and mitigating potential vulnerabilities. These technologies enable more sophisticated automation and reduce the impact of human error. On the other hand, AI also introduces new risks, as attackers leverage the same technologies to develop more advanced and sophisticated attacks.

Developing Cyber Resilience Strategies

Creating effective cyber resilience strategies involves thorough risk assessment, proactive planning, and continuous improvement. Organizations must begin by identifying their critical assets and assessing potential threats to understand their specific cyber threat landscape. With this information, they can establish a tailored cyber resilience framework.

A robust cyber resilience framework typically includes preventive measures like regular security updates and employee training, alongside incident detection and response protocols. Building resilience also requires regularly testing recovery and backup plans. Organizations should adapt their strategies based on lessons learned from past incidents and anticipate future challenges, which requires expertise, skill, and informed predictions.

Key Components of Cyber Resilience

Cyber resilience provides organizations with clear guidelines on restoring operations after a cyber incident. This involves well-defined recovery plans that are regularly tested and updated to address emerging vulnerabilities. Identifying critical systems and data is a priority, allowing organizations to focus their recovery efforts where they are needed most.

A cornerstone of cyber resilience is data backup. Without a reliable backup, a recovery plan is essentially ineffective. Backup strategies should be integrated into the broader resilience framework, with backups regularly updated and securely stored in multiple locations to protect against cyber threats. The emphasis is not just on creating backups but also on ensuring the ability to quickly access and restore data from these backups without compromising security or operational continuity.

Choosing the Right Frameworks for Your Cyber Resilience Strategy

When developing a cyber resilience strategy, organizations should consider key frameworks. The NIST (National Institute of Standards and Technology) Cybersecurity Framework offers a well-established approach with its six pillars: Identify, Protect, Detect, Respond, Recover, and Govern. Additionally, regulations such as DORA (Digital Operational Resilience Act) and NIS2 (Network and Information Systems Directive 2) should be reviewed, particularly by organizations operating within the European Union, to ensure that backup and recovery strategies are compliant and robust.

We are back in town - cyberevolution 24

We are excited to invite you to our cyberevolution event in Frankfurt on December 3-5, 2024. We will be exploring a wide range of cybersecurity topics, with plenty of chances to chat with industry experts. Cyber resilience will be one of the big topics on the agenda. In a combined session, Mike Small will discuss “Why you need data backup and how AI can help” and Joshua Hunter will provide insights into “Focus on Cyber Resilience - Prepare, Respond, Resume”. We look forward to seeing you there and to having some great discussions.


auth0

Developer Day 2024: A Sneak Peek

Take a sneak peek at DevDay. We have created 24 hours of content for you to level up your identity skills through talks, panel discussions, labs and much more!

Thursday, 22. August 2024

Spruce Systems

Debunking Myths about the Mobile Driver's License

Learn about some of the common misconceptions when it comes to mobile driver's licenses (mDLs).

While artificial intelligence is in the spotlight, a quieter technology revolution is underway: a large-scale push to build secure digital identity systems. This is, in part, driven by verifiable digital identity being a complementary technology to AI. With AI-generated text, images, and increasingly convincing videos, having a way to verify something or someone is provably who or what they claim to be will be crucial. The heightened security of encryption-backed identity can dramatically mitigate types of fraud, hacking, and impersonation.

Building digital ID is largely a problem of coordination – getting buy-in for a novel system from everyone from legislators to major enterprises to state agencies. One early leader in contention for defining the digital ID future is a set of standards known as “mDL,” or the Mobile Drivers License – a real, state-issued credential stored on a mobile device. The mDL is just one part of the fast-growing digital identity ecosystem, but it’s being used in our pilot program with the state of California and other pilots across the United States.

You might have some preconceptions about how a driver’s license that lives on a mobile device works based on your familiarity with other digital services, such as logging in to a website. But this new generation of credentials is built much differently, using recent innovations in cryptographic digital signatures.

This makes digital credentials, like a mobile driver’s license, far more secure and private than a web-based service, among other implications. But to understand this new kind of security and privacy, you have to leave behind some old ideas.

The “Photo of a Plastic ID” Myth

A mobile driver's license (mDL) is far more than just a digital image of your physical ID. Unlike a simple photo, an mDL is embedded with cryptographic digital signatures, ensuring that the data it contains is both tamper-evident and provably authentic. This means that anyone verifying your ID, whether in person or online, can trust that the information hasn’t been altered, providing higher security and trust than a static image.

One of the key advantages of mDLs is their versatility in both physical and digital realms. Whether you're verifying your identity in person, such as at a traffic stop or an airport, or over the internet for online services, mDLs offer a seamless digital verification experience. This flexibility is something a static image on your phone just can’t offer, especially as our lives become more intertwined with digital interactions.

While a photo of your ID reveals all your personal details, a significant benefit of mDLs is the ability to share only the necessary information for a specific interaction, rather than revealing all the personal details on your driver's license. For example, if you're buying age-restricted products, the mDL can confirm your age without exposing your address or other sensitive information. This minimal disclosure feature enhances privacy and reduces the risk of identity theft.
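The minimal-disclosure idea can be shown in a few lines: the verifier asks for specific data elements, and the wallet releases only those. This is a conceptual sketch only; the element names loosely follow ISO 18013-5 identifiers, and the real protocol's session encryption and issuer signatures are omitted here.

```python
# Conceptual sketch of mDL selective disclosure: the holder's wallet
# releases only the data elements a verifier explicitly requests.

CREDENTIAL = {
    "family_name": "Doe",
    "given_name": "Jane",
    "resident_address": "123 Main St",
    "age_over_21": True,  # a derived boolean, not the birth date itself
}

def present(requested: list) -> dict:
    """Return only the requested elements; everything else stays on-device."""
    return {k: CREDENTIAL[k] for k in requested if k in CREDENTIAL}
```

For an age-restricted purchase, `present(["age_over_21"])` hands over a single boolean; the holder's name and address never leave the wallet.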

Finally, mDLs are built on global standards like ISO/IEC 18013-5 and ISO/IEC 18013-7, which means they can be accepted across industries and borders. A photo of your ID might be accepted in some places, but it lacks the standardization needed for widespread trust and interoperability. These standards ensure that mDLs can be trusted by various entities, from law enforcement agencies to financial institutions, no matter where you are. This broad acceptance and reliability make mDLs a future-proof solution for secure identity verification in our interconnected world.

The “Phone Home” Myth

If you’re still new to the idea of the mobile driver’s license, you might assume they offer less privacy than a hard-copy ID. From bank accounts to college enrollment, we’ve become very used to proving our identity by sending a password to a remote database over the internet. Similarly, you might assume that a mobile driver’s license may require pinging back to a government agency server whenever someone wants to verify your identity. If that were how a mobile driver’s license worked, it would create yet another trail of data that could be used to track you, like many web services do today. This is known as the “phone home” problem.

To be clear, mobile driver's license programs can be implemented in that way, creating (even inadvertently) a new surveillance system. But there are ways to implement mobile driver's licenses that don't have to "phone home," which is how we approach our implementations at SpruceID in our work with customers.

The mDL standard is ultimately a shared data format, and the systems around it can be built in many ways, but the core mDL architecture can be implemented using an entirely new kind of digital “proof” that checks the validity of an ID issuer’s digital signature locally, called “device retrieval” in the mDL specification. That means no pinging a remote server, and no risky data trail.

Instead, a mobile driver’s license (or other digital credential) is verified by a file on your device. That includes a private digital “signature” proving that it’s from the correct issuer, like the DMV. The signature corresponds to a private key held by the issuing agency that is secret, so no one but the DMV can issue DMV-signed credentials; it’s tied to your specific hardware device, so the file itself can’t be copied; and it’s cryptographically signed to your identity information, so it can’t be tampered with. 
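The "device retrieval" flow can be sketched as follows. One loud caveat: real mDLs use asymmetric COSE/ECDSA signatures verified against the issuer's public key; the HMAC below is a simplified symmetric stand-in so the sketch runs on the standard library alone. What it does show faithfully is the key property: verification is a local computation over the credential's bytes, with no call to a DMV server.

```python
# Simplified sketch of offline credential verification: the verifier
# checks the issuer's signature over the claims entirely on-device.
# hmac stands in for the issuer's (really asymmetric) signature scheme.
import hashlib
import hmac
import json

ISSUER_KEY = b"dmv-signing-key"  # stands in for the DMV's private key

def issue(claims: dict) -> dict:
    """Issuer signs the claims once, at issuance time."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": tag}

def verify(credential: dict) -> bool:
    """Runs entirely on the verifier's device: no network round trip."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])
```

Any edit to the claims after issuance breaks the signature check, which is what "tamper-evident" means in practice.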

The “Supercookies” Myth

Even if a digital identity check doesn’t create a real-time trail of digital pings over the internet, an ID check can still leave a record on the device or system of the verifier. For instance, when you buy a case of beer, the liquor store might not ping the DMV’s server – but it will probably retain a record of the verification. 

These records can be a risk to your privacy. If a third party gathers the scattered records of your ID checks, it can create a record of some of your activities – for instance, how often you visit the liquor store. This is a widespread practice when it comes to records of your web browsing – the collated records of your online activity are known as “supercookies,” and are often used to target you with advertising.

This risk is a good example of how regulation and best practices are necessary complements to new technology – new laws, or reasonable disclosure frameworks, might be needed to ban the practice of making real-world supercookies. However, there’s also a more immediate solution: the issuers of digital credentials can impose data-deletion policies that require verifiers to delete records of identity checks. 

With a few exceptions, such as law enforcement, verifiers should be okay with deleting these records immediately, significantly reducing supercookie risk. Best of all, there are cryptographic methods for proving that the data is actually disposed of.

This is a great example of a key principle in digital credential design. The mobile driver’s license (mDL) is a data standard for digital identity, but many of the systems around that data standard can be designed in many different ways. Some ways of building an mDL system might enable or even encourage archiving data to build a “supercookie,” but systems can also be built to discourage or disallow them. 

By the same token, other digital credential standards, including SD-JWTs and W3C Verifiable Credentials, can also be deployed in ways that enable tracking. In essentially every case, no tech standard can guarantee user privacy; therefore, how the system is designed, and how that design is guided by regulations and agreements, is key.

Technology, Legislation, and Markets In Harmony

Unfortunately, the greater privacy and control enabled by encryption-based digital identity won’t just happen magically. While the technology has the potential to create a more innovative and secure system, the specific way it is built in the coming years will determine whether that potential is fulfilled. 

Many of the teams building these systems have the highest ideals, and are already working to build privacy-preserving features into their structure. But technology alone isn’t enough, in this case, or in general: technology and policy must work in concert to create the future we want.

We believe the best way to guarantee a future identity system that’s both secure and private is legislation that supports the goals of the technology. That legislation, which organizations like the ACLU are currently pushing forward, would bar abuses like surveillance using digital identity – whether for commercial purposes, or more nefarious ones.

We encourage all players in the digital identity space, and potential future users of tools like the mobile driver’s license, to participate in those legislative efforts. Done right, they will help make sure that an exciting new technology supports freedom, safety, and innovation, working together as one.

Are you interested in learning more about digital credentials such as the mobile driver’s license and how they might work for your use case? Explore our website to learn more.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


IdRamp

MS Entra ID: Advanced Account Recovery with Identity Verification


IdRamp has partnered with Microsoft (MS) to bring Identity Verification (IDV) to the Entra ID account recovery process. Account takeover attacks increased by 350% last year, causing nearly $13 billion in losses.

The post MS Entra ID: Advanced Account Recovery with Identity Verification first appeared on Identity Verification Orchestration.

Caribou Digital

Rethinking innovation funding in the age of AI


Applicants can now use generative AI to craft powerful funding proposals.

What does it mean for organizations running competitive grants and innovation funds?

A significant shift is underway in the ever-evolving landscape of impact investing and competitive grant-making programs. In recent years, artificial intelligence (AI) has become a buzzword in many domains, including in donor funding landscapes. It is pushing funding organizations to rethink how they approach innovation funding and how to ensure the “do no harm” principle applies when delivering innovation for social, environmental, and economic impact.

At Caribou Digital, we’re keenly focused on how generative AI can impact, modulate, and drive an inclusive and ethical digital world. Large language models (LLMs), like ChatGPT, have particularly piqued our interest in our fund management work and are causing us to reflect on our approaches and practices. This blog post highlights some of these reflections.

Embracing LLMs in grant-writing: A double-edged sword

ChatGPT’s emergence has brought about three critical lessons for consideration:

1) Generative AI can break down barriers to applying for grants (like time and skill gaps)

ChatGPT and other LLMs are impressively proficient in writing grant applications. There are even some LLMs focused specifically on grant writing, like Grantable and others. The “traditional” grant application process has been a grueling task. It can be complex, time-consuming, and disempowering for applicants. It often takes senior staff away from their day-to-day duties and regularly offers no reward for their efforts. Applicants are commonly unsuccessful because they fail to clearly and effectively convey their idea, innovation, or project plan. However, new tools — from Grammarly to grant-writing LLMs — have the potential to save applicants time and money in this process. They can make grant-writing more accessible and less intimidating, as well as reduce language barriers or address accessibility issues for applicants with disabilities.

2) Generative AI makes it easier to communicate compelling ideas clearly

Encouraging AI in grant proposals can democratize idea sharing, allowing for a broader range of applicants to present their visions compellingly and coherently. AI could level the playing field for small organizations with limited or no access to experienced grant writers. Or, fund managers may see that grant applicants with disabilities and those who are neurodiverse are better able to write applications without worrying about how their dyslexia (for example) might limit their chances of funding success. So, it is Caribou Digital’s theory that a more diverse pool of applicants can now complete grant applications quickly and unlock critical funding.

3) Generative AI, if used effectively by fund managers, can encourage “unusual suspects” to apply to their grant programs

By lowering the traditional barriers to entry for grants, like time and language costs, LLMs open doors for a more diverse pool of innovators.

Here’s a case study to demonstrate how LLMs could reach “unusual suspect” innovators.

As a fund manager, Caribou Digital usually requests grant applications in a single language: English. This is mainly because we manage grants in English, so all our policies, templates, and tools for tracking require input in English. We understand this immediately creates a bias against non-native English speakers, who have to convey complex, often technical ideas in their second or third language. If innovators could apply for community-based projects in more relevant languages (e.g., Swahili, Luganda, Arabic, Bengali, etc.), would more people apply with truly exciting and/or community-based ideas? Today, even basic LLM translation services can enable small, community-based organizations to quickly submit quality applications. Hypothetically, these tools would allow us to receive applications in local dialects and engage throughout the grant period in some of those languages, even if our team doesn’t have fluency in the selected language. But we also need to be highly conscious that these bold changes to processes could also contribute new biases, as LLMs are well known to be poor advocates for generating high-quality content in non-English languages. (See, for example, this article on AI language equity issues from Rest of World.)

Photo by Igor Omilaev on Unsplash

How can we identify authentic talent? Why we are rethinking our practices

In the context of generative AI and grant-making, fund managers need to be acutely aware of how biases could get built into project design. Even without the widespread use of LLMs, there is almost always bias in the selection of grants. It is therefore logical to assume LLMs can exacerbate existing (or even create new) bias in grant-awarding processes.* This selective bias makes it incredibly challenging to engage with grant-making tools. It is our responsibility as fund managers to actively ensure no conscious or unconscious bias is introduced into the process.

If, for example, fund managers allow the use of AI tools in grant applications, we must also invest in a rigorous evaluation of bias, perhaps even involving critical colleagues working undercover as independent teams to reduce bias in application processes. By doing so, we ensure that using AI in grant-making processes does not inadvertently perpetuate existing inequities.

While AI can polish and perfect an application, it’s essential to develop mechanisms that enable fund managers to capture the authentic talent behind “artificial intelligence.” It’s time to rethink how we structure our submission practices and interfaces. We must find ways for applicants to demonstrate their authentic selves beyond the more polished face that LLMs and other AI tools can provide. This requires a fundamental shift in our approach: embracing AI where it enhances equity and inclusion while remaining vigilant against its potential to introduce new forms of bias.

At Caribou Digital, we’re committed to exploring innovative methods that allow for a more genuine representation of applicants’ potential. By doing so, we can ensure that the best ideas, no matter where they come from, have a fair chance to shine. We’re currently thinking about ways we can support genuineness in applications, such as:

- Allowing applicants to provide a video application (rather than solely text-based applications).
- Reducing or removing the need for computer access, for example by running an application process on WhatsApp or by mobile phone.
- Plugging into existing platforms that allow applications to be submitted from an existing profile or organizational presence (e.g., f6s or LinkedIn).
- Working with community-based organizations who can make initial recommendations or referrals on behalf of potential grantees, omitting lengthy written applications.

We know that none of these ideas will eliminate bias in grant applications and assessments (some might even exacerbate it). However, AI tools in grant-writing have highlighted the need for innovation in how we assess authenticity and potential, and it’s time to test some new approaches.

Please reach out if you’d like to discuss this further.

*The perception of bias varies widely; what seems unbiased to one person may be seen differently by someone with a different background or political belief. One excellent showcase of examples of this is the Rest of World AI series.

Rethinking innovation funding in the age of AI was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Verida

Verida and Marlin: A Partnership to Power Private AI

Verida and Marlin; Powering the next generation of private AI

Verida is on a mission to empower individuals to own and control their data, ultimately enabling a future of private AI. This vision involves creating a decentralized ecosystem where personal data can be securely managed, processed, and utilized without compromising privacy.

This collaboration will enable developers to build their own Private AI Assistants. Imagine an AI like ChatGPT, but with 100% end-to-end privacy — working exclusively for you. One private vault, with multiple data sources.

To achieve this, Verida is building a robust infrastructure stack that includes:

- Verida Private Data Bridge: enabling seamless data transfer from various platforms to a user’s Verida vault.
- Confidential Compute: utilizing Trusted Execution Environments (TEEs) to create a network of secure, isolated infrastructure nodes for data processing.
- Private Compute: building upon confidential compute to provide granular user control over data access, usage, and application deployment.

At the heart of Verida’s vision lies private AI, where AI models can be trained and operated on personal data while preserving user privacy. This requires a robust infrastructure capable of handling sensitive data securely and efficiently.

TEEs play a pivotal role in this ecosystem by providing secure, isolated environments for data processing and AI computations. However, deploying and managing TEE-based applications can be complex. This is where Marlin’s Oyster comes in.

Oyster is a TEE coprocessor for AI, designed to simplify the development and deployment of AI applications that require high levels of security and privacy. By leveraging Oyster, Verida will:

- Accelerate AI development: Oyster’s platform provides a ready-made infrastructure for deploying confidential AI applications, saving development time and resources in building Verida’s confidential compute network.
- Enhance AI security: Oyster’s TEE-based architecture strengthens the security of AI models and data, protecting sensitive information from unauthorized access and connecting to Verida’s existing confidential storage network.
- Optimize AI performance: Oyster’s focus on performance can help the Verida confidential compute network deliver faster and more efficient AI experiences for users.

Chris Were, CEO of Verida, expressed his enthusiasm for the partnership:

“We have been very impressed with the Marlin technology and the team as we have collaborated on our PoC over the past several months. There is a significant shortage of privacy-preserving computation options today, so it’s been refreshing to work with a great team and quickly put together a powerful demonstration of what’s possible.”

Esli, Head of Ecosystem at the Marlin Foundation, added:

“By combining Verida’s technology with Marlin’s confidential compute platform, it is possible to unlock the power of a truly private AI assistant. This solution ensures that users’ personal information remains confidential even when training the assistant on their data.”

This partnership between Verida and Marlin represents a significant step forward in the development of private AI. By combining Verida’s vision with Marlin’s cutting-edge technology, we are creating a foundation for a future where individuals have complete control over their data and how it’s used by AI.

Together, Verida and Marlin are committed to empowering individuals and building a world where data privacy and AI coexist harmoniously.

About Verida Network

Verida is a decentralized network that empowers individuals to take control of their personal data, enabling secure storage, sharing, and management. Verida’s infrastructure supports private AI, decentralized identity (DID), and verifiable credentials, all while ensuring that users maintain ownership and control over their data. Verida’s mission is to create a future where users can harness the power of AI without compromising privacy or security.

For more information, visit Verida Network.

About Marlin

Marlin is a verifiable computing protocol featuring TEE and ZK-based coprocessors to delegate complex workloads over a decentralized cloud. Servers provisioned using smart contract calls host ML models, gateways, frontends, MEV or automation bots, or backends for arbitrary computations using external APIs with baked-in auto-scaling and fault tolerance. Marlin is backed by Binance Labs and Electric Capital.

For more information, visit Marlin

Verida and Marlin: A Partnership to Power Private AI was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Evernym

New Trends in Access Management: Embracing the Future of Security


New Trends in Access Management: Embracing the Future of Security In today’s digital world, access management is more critical than ever. Organizations are increasingly recognizing the need to protect their data and systems from unauthorized access while providing seamless user experiences. The landscape of access management is evolving rapidly, with new ...

The post New Trends in Access Management: Embracing the Future of Security appeared first on Evernym.


Ocean Protocol

Ocean Nodes Incentives Update: Start Date & Dashboard Upgrades


In this post, we provide important updates on the Ocean Nodes Incentive Program and the rollout of Ocean Nodes Boosters (ONBs)

Introduction

We’ve been hard at work addressing some issues with the Incentives Dashboard and making sure everything is fair and square for everyone participating in the Ocean Nodes Incentives Program. Today, we’re back with an important update on the program and some enhancements to the Ocean Nodes Dashboard.

Our commitment to fairness and transparency is driving these changes, and we want to ensure that everyone in the community has a level playing field. Here’s what you need to know.

Incentives Program Update: Why We’re Moving the Start Date

As you may have noticed, we recently encountered an issue with the monitoring system responsible for tracking node uptime and eligibility for incentives. The good news, and the main reason for this blog post, is that thanks to our team’s hard work we have fixed the bugs and corrected the logic fault we identified in the eligibility checks.

However, to make sure everything is running smoothly and fairly, we’ve decided to push the start of the incentives program back to August 29. This allows us to roll out the necessary backend updates and ensures that our monitoring system is robust and reliable, creating a fair environment for all participants.

Ocean Nodes Dashboard: New Features and Improvements

While we work on the backend updates, we’ve already moved forward and added some improvements to the Ocean Node Dashboard:

Enhanced Table Functionality: We’ve added sorting, filtering, and the ability to select which columns you want to view. This gives you more control over how you interact with the data.

New Columns: “Reward Eligibility” and “Eligibility Issue”:

- Reward Eligibility: indicates if your node is eligible for incentives and ONBs.
- Eligibility Issue: if your node isn’t eligible, this column will explain why. Currently, you might see messages like “Node cannot be accessed publicly, no public IP announced by the node!” or “Ocean Protocol Foundation node!”. In time we will provide more detailed information here.

Please note that the uptime that you see here is for the current epoch, meaning that every Thursday, the uptime will reset to 0 for all nodes. We’re storing all historical data, and in a future update, we’ll introduce more options for data visualization.

However, until we push the backend update, the values in the “Reward Eligibility” column might still be inaccurate. We’re working hard to solve this as soon as possible.

Steps to Install the Node and Be Eligible for Rewards

To help you get started and ensure your node is eligible for rewards, follow these steps:

1. Find your public IP: you’ll need this for the configuration. You can easily find it by googling “my IP”.
2. Run the Quickstart Guide: if you’ve already deployed a node, we recommend either redeploying with the guide or ensuring that your environment variables are correct and you’re running the latest version.
3. Get your Node ID: after starting the node, you can retrieve the ID from the console.
4. Expose your node to the internet: from a different device, check that your node is accessible by running
telnet {your ip} {P2P_ipV4BindTcpPort}

To forward the node port, please follow the instructions provided by your router manufacturer (e.g., Asus, TP-Link, Huawei, Mercusys).
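If telnet isn’t available on the device you’re testing from, the same reachability check can be sketched in a few lines of Python (the IP and port below are placeholders for your own values):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    i.e. the node's P2P port is reachable from outside."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders: substitute your node's public IP and configured P2P port.
# print(port_reachable("203.0.113.10", 9000))
```

A `False` result usually means the port isn’t forwarded or a firewall is blocking it.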

Verify eligibility on the Ocean Node Dashboard: check https://nodes.oceanprotocol.com/ and search for your peerID to ensure your node is correctly configured.

Considerations

As Ocean Nodes are currently in an alpha stage, please remember to:

- Regularly update your deployment to maximize uptime.
- Account for potential issues such as node bugs*, internet disruptions, and more when measuring uptime.

*Report bugs in our dedicated Discord channel so we can address them as soon as possible. When reporting, please include useful information such as the environment variables (excluding private keys), hardware specifications, and relevant logs. Please remember NOT to share your private key with anybody.

Note: The current uptime may not be accurate, as we’ve been testing and the monitoring system has been off multiple times. The uptime will reset on Thursday, August 29, at 00:00 UTC.

Ocean Nodes Boosters (ONBs): Criteria and Distribution

For Phase 1 ONBs, which will grant a 1.5 rewards multiplier, we will consider node uptime. Starting on August 29, we’ll begin tracking uptime across the first four epochs, which will run from August 29 to September 26.

At the end of this period, the top 50 nodes with the highest uptime will each receive a Phase 1 ONB. If multiple nodes have the same uptime, we’ll mint additional ONBs to ensure that no one is left out.
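The tie-inclusive top-50 selection described above can be sketched as follows (a hypothetical helper for illustration; the program’s actual accounting may differ):

```python
def select_onb_recipients(uptimes: dict[str, float], top_n: int = 50) -> list[str]:
    """Rank nodes by uptime and pick the top_n; nodes tied with the
    cutoff uptime are all included, so no one is left out."""
    ranked = sorted(uptimes.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) <= top_n:
        return [node for node, _ in ranked]
    cutoff = ranked[top_n - 1][1]  # uptime of the last node inside the top_n
    return [node for node, up in ranked if up >= cutoff]

# Two slots, but two nodes tied at the cutoff: all three are selected.
print(select_onb_recipients({"a": 99.0, "b": 98.5, "c": 98.5, "d": 97.0}, top_n=2))
# ['a', 'b', 'c']
```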

To qualify for ONBs and incentives, your node must meet the following criteria:

- Public Accessibility: nodes must have a public IP address.
- API and P2P Ports: nodes must expose both HTTP API and P2P ports to facilitate seamless communication within the network.

Conclusion

We appreciate your patience and understanding as we work through these updates. Our goal is to ensure that the Ocean Nodes Incentives Program is fair and rewarding for everyone involved. Thank you for your continued support!

Stay tuned for more updates by following us on X and joining the discussion in our Discord Server.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Ocean Protocol is a founding member of the ASI Alliance.

Follow Ocean on Twitter or Telegram to keep up to date, and Predictoor’s Twitter for its news. Chat directly with the Ocean community on Discord. Track Ocean’s tech progress directly on GitHub.

Ocean Nodes Incentives Update: Start Date & Dashboard Upgrades was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ockto

Open Banking & PSD2: on income verification with bank transactions, among other topics

Podcast: Open Banking & PSD2
On income verification with bank transactions, among other topics

In this episode of the Data Sharing Podcast, we dive into the world of Open Banking and PSD2. Open Banking enables consumers and businesses to share financial data with third parties, opening new opportunities for innovation and services within the financial sector.

Thanks to Open Banking, organizations can quickly and accurately verify the income data of potential customers, leading to more efficient and reliable credit assessments and other (financial) services.


Ocean Protocol

DF103 Completes and DF104 Launches

Predictoor DF103 rewards available. DF104 runs Aug 22 — Aug 29, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 103 (DF103) has completed.

DF104 is live today, Aug 22. It concludes on August 29. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF104 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF104

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF103 Completes and DF104 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 21. August 2024

Elliptic

The US stablecoin landscape: leveraging Ecosystem Monitoring to build trust

The United States policy and regulatory landscape remains in significant flux when it comes to the topic of stablecoins. 



Lockstep

What do verifiable credentials verify?


Verifiable credentials are one of the most important elements of digital identity today.

What exactly does a verifiable credential verify?

And while we’re on the subject, what is a credential anyway?

Let’s start with existing analogue credentials. Thanks to English, “credential” can be a verb or a noun. And the noun can take two or three very different meanings.

Photo credit: Akbar Nemati via Pexels.

Credentialing

The noun credential usually refers to “a qualification, achievement, quality or aspect of a person’s background, especially when used to indicate their suitability for something” (Ref: Oxford Languages).

There’s a subtle implication in the everyday sense of the word: a credential is generally associated with the criteria for its particular quality and suitability.

Consider professional credentials.  A budding accountant for instance must obtain a particular degree by passing certain tests set by a university; in addition, that degree needs to be deemed suitable by a professional accounting body.

So in this sense, every credential is an abstraction which represents that the holder has satisfied certain rules. A credential has meaning and context.

As a verb, “credential” means to provide someone with credentials.  This might seem obvious, but I think it’s the more important sense of the word. A credentialing process is a formal (rules-based) sequence of events, which has usually been designed to establish the holder’s suitability to undertake specific activities. There is a tight relationship between the credentialing process and the intended use of the credential.

Examples include the onboarding of new employees, enrolment in university courses, admission to professional associations (including recognition of international qualifications), approval of journalists to attend special events such as political conventions, security clearances, and nations’ citizenship requirements.

Credentialing processes are famously conservative. They are the sovereign stuff of nations, academic institutions, and professional societies. Right or wrong, professional credentials are notoriously provincial and difficult to have recognised between different jurisdictions. Credentialing bodies zealously represent communities of interest and reserve the right to set rules as they see fit.

Going from physical to digital credentials

Traditionally, many credentials have been physically manifested as cards, membership tokens and other badges, used by the holder to prove their status to other parties who need to know. These items provide a number of familiar cues to assure us that a credential is genuine, the issuer is legitimate, and the credential hasn’t been modified. Some include photographs which help to show that the credential is in the right hands when presented.

By the way, the plastic card itself is sometimes called a “credential”, but it is more useful to think of it as a carrier or container of credentials, especially as we shift from analogue to digital.

Yet in the move to digital, most credentials in the abstract sense have retained their essential meaning. For example, a government authorised Medicare provider or licenced plumber should be able to assert precisely the same authority in any of their digital workflows—nothing less and nothing more—as they do in the real world.

Credit cards as credentials

A credit card is a token which signifies that the holder is a paid-up member of a payment scheme. The principal data carried by a credit card is a specially formatted number (known as the Primary Account Number or PAN) which encodes membership of the scheme, identifying the cardholder, the scheme and the issuing bank. Note that a credit card is a container that usually carries just one credential.
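The PAN’s format is standardized (ISO/IEC 7812): the leading digits identify the scheme and issuing bank, and the final digit is a Luhn check digit that catches most transcription errors. A small sketch of the Luhn check:

```python
def luhn_valid(pan: str) -> bool:
    """Check a card number's Luhn check digit; ignores spaces."""
    digits = [int(ch) for ch in pan if ch.isdigit()]
    total = 0
    # Walk from the rightmost digit; double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return len(digits) > 1 and total % 10 == 0

# A well-known test PAN published by card schemes for integration testing.
print(luhn_valid("4111 1111 1111 1111"))  # True
```

Note the check digit only guards against typing mistakes; it provides no security at all, which is why plaintext PANs are such attractive targets.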

Credit card numbering has remained unchanged for decades. With the introduction of electronic commerce, shoppers were able to use their card numbers online, thanks to Mail Order / Telephone Order (MOTO) rules. These had been established years before e-commerce to allow merchants to accept plaintext card numbers in card-not-present (CNP) settings.

To combat CNP fraud, the Card Verification Code (CVC) was introduced — an additional number on the back of the credit card that would not be registered by merchants’ card imprinting machines, and so could not be harvested by dumpster-diving identity thieves.

The CVC is a classic example of security metadata — an additional signal used to confirm the data that really matters, namely the credit card number. Credit card call centre operators had access to back-office lists of PANs and matching CVCs; if a caller could quote the CVC correctly, it was assumed they had the physical card in their hands.
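The back-office check described above amounts to a lookup plus comparison. A minimal sketch, with a hypothetical record table and a constant-time comparison to avoid leaking information through timing:

```python
import hmac

# Hypothetical back-office table of PANs and their CVCs (illustration only;
# real systems store these under strict PCI DSS controls).
CVC_RECORDS = {"4111111111111111": "123"}

def cvc_matches(pan: str, quoted_cvc: str) -> bool:
    """Check a caller-quoted CVC against the stored record,
    comparing in constant time to avoid timing side channels."""
    stored = CVC_RECORDS.get(pan)
    if stored is None:
        return False
    return hmac.compare_digest(stored, quoted_cvc)

print(cvc_matches("4111111111111111", "123"))  # True
```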

Enter cryptography

Verifiable credentials (sometimes “VCs” for short) are the strongest mechanism today for asserting important personal attributes, such as driver licences, professional qualifications, vaccinations, proof of age, payment card numbers and so on. VCs are central to the next generation European Union Digital Identity (EUDI), the ISO 18013-5 standard mobile driver licences (mDLs) and the latest digital wallets.

Several new VC data structure standards are under development, including the World Wide Web Consortium (W3C) VC data model and ISO 18013-5 mdocs.

All forms of VC include the following:

- information about a particular “Subject” (usually a person, also referred to as the credential holder), such as a licence number or other credential ID
- a name for the Subject (typically a legal name, but pseudonyms are sometimes possible)
- the digital signature of the issuer
- usually, a public key of the Subject (used to verify signed presentations of the VC made from a cryptographic container or wallet)
- metadata about the credential (such as its validity period and the type of container it is carried in)
- metadata about the issuer (such as a company legal name, corporate registration number, Ts&Cs for credential usage, etc.).
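As an illustration only (field names are loosely modeled on the W3C VC data model, and a plain hash stands in for the issuer’s digital signature), such a claim set might look like:

```python
import hashlib
import json

# Illustrative claim set; field names loosely follow the W3C VC data model.
credential = {
    "issuer": {"name": "Example Licensing Authority", "registrationNumber": "12345678"},
    "credentialSubject": {"id": "did:example:holder-1", "licenceNumber": "PL-0042"},
    "validFrom": "2024-01-01",
    "validUntil": "2026-01-01",
}

# Stand-in for the issuer's signature: a hash over the canonicalised payload.
# A real VC carries a digital signature verifiable with the issuer's public
# key, which proves authorship as well as integrity.
payload = json.dumps(credential, sort_keys=True).encode()
proof = hashlib.sha256(payload).hexdigest()

signed_vc = {**credential, "proof": {"type": "sha256-placeholder", "value": proof}}
print("proof" in signed_vc)  # True
```

Any change to the claims invalidates the proof value, which is the integrity property the issuer’s real signature provides.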

The digital signature of the issuer preserves the provenance of a verifiable credential: anyone relying on the VC can be assured of its origin and be confident that the credential details have not been altered.

When a VC is presented from a cryptographically capable wallet, a message or transaction incorporating the credential can also be digitally signed using a private key unique to the credential. This assures the receiver that the credential as presented was in the right hands.

Verifiable presentation proves the proper custody and control of the credential and is just as important as verifiability of a credential’s origin.

Telling the story behind the credential

Provenance and secure custody are unique assurances provided by verifiable credentials, but I think the greater power of this technology lies in the depth of the metadata.

VCs deliver rich ‘fine print’ about the credential, the issuer, the wallet and the way in which it was presented, all reliably bound together through digital signatures. So whenever you use a VC to access a resource or sign a piece of work, you leave behind an indelible mark that codifies the history of your credential.

As mentioned, a credential is issued through a formal process, and is recognised by a community of interest as signifying the suitability of its holder for something.

For a person to hold a verifiable credential in a personal cryptographic wallet, a series of specific steps must have taken place.

First and foremost, the Issuer will satisfy itself that the Subject meets all the credentialing requirements. A VC usually carries a public key unique to the Subject and their wallet; this binding to a physical device means the Issuer can be sure that it hands out its credentials only to the correct individuals. It also allows the Issuer to specify the precise type of device(s) used to carry its credentials — all the way down to smart phone model and biometric performance if those things matter under the Issuer’s security policy.

Virtual credit cards in digital wallets

Continuing our look at credit cards as credentials, the provisioning of virtual credit cards to mobile wallets illustrates the degree of control that a VC issuer has over the end-to-end process.

Typically, a virtual credit card is provisioned to a digital wallet via a mobile banking app running on the same device. Banks control how their apps are activated. Almost anyone can download a banking app from an app store, but only a genuine customer can get the app to do anything, following their bank’s prescribed activation steps (which might include, e.g., entering account-specific details, calling a contact centre, or even visiting a branch for additional checks). Only then will the bank send secure instructions to the device to load a virtual card. The customer will need to unlock their phone (by biometric or PIN) to complete the load.

Behind the scenes, any bank offering mobile phone credit cards must have also made prior arrangements with the phone manufacturer to gain access to the hardware. Apple and Google (the major digital wallet platforms) undertake rigorous due diligence so that only legitimate banks are granted this all-important power.

All this history is coded as metadata into the verifiable credential. When a merchant point-of-sale system receives a signed payment instruction from a digital wallet, we can all be sure that:

- the digital wallet has been unlocked by someone who controls the phone
- the credit card is genuine and was issued by the bank indicated in the credential
- the card was loaded to the wallet by a customer who was approved to use the mobile banking app and was authenticated to do so (making it highly likely that the mobile phone customer and the cardholder are the same person)
- the cardholder is a registered customer of the bank and has passed that bank’s KYC processes.

The VC can include the type of phone it is carried in; it is even possible for the VC to record if the virtual card was issued remotely or in-person.

Minimalist VCs

The acute problem with online authentication today—often given the catch-all label “identity theft”— arises from the use of plaintext credentials and identifiers.

There are countless scenarios where a counterparty needs to know you have a particular credential, but if the only evidence you can provide is a plaintext number, then businesses and individuals alike are sitting ducks because so many identifiers have been stolen in data breaches and traded on black markets.

The simplest, lowest risk solution is to conserve the important IDs we are all familiar with, but harden them in digital form, so they cannot fall into criminal hands.

That might sound complicated, but we have done it before!

The transition from magnetic stripe to chip payment cards was made for exactly the same reason: to eliminate plaintext data.  Chip cards present cardholder data through digitally signed verifiable messages — making them one of the earliest examples of verifiable credentials.

Digital wallets use the same technology as chip cards and are rapidly taking over from plastic. The Reserve Bank reports that well over one third of card payments by Australian consumers are now made through mobile wallets. Yet as we have seen, the meaning and business context of credit cards were unchanged through the course of these technology upgrades. That conservation of credentialing processes was key to the chip revolution.

Minding your business

In any digital transformation, it is not the new technology that creates the most cost, delay and risk; rather it’s the business process changes. The greatest benefit of verifiable credentials is they can conserve the meaning of the IDs we are all familiar with, and all the underlying business rules.

The real power of VCs lies not in what they change but what they leave the same!

A minimalist verifiable credential carrying a government ID means nothing more and nothing less than the fact that the holder has been issued that ID. By keeping things simple, a VC avoids disturbing familiar trusted ways of dealing with people and businesses.

Powerful digital wallets are being rapidly embraced by consumers; modern web services are able to receive credentials from standards-based devices. We are ready to transform all important IDs from plaintext to verifiable credentials. Most people now could present any important verified data with a click in an app, with the same convenience, speed and safety as showing a payment card. With no change to backend processes and credentialing, we would cut deep into identity crime and defuse the black market in stolen data.

The post What do verifiable credentials verify? appeared first on Lockstep.

Tuesday, 20. August 2024

Spruce Systems

SpruceID Joins NIST National Cybersecurity Center of Excellence (NCCoE) to Accelerate Mobile Driver’s License Adoption

Learn about the current initiative, benefits of the mobile driver's license, and how SpruceID will collaborate with the NCCoE.

SpruceID is participating in the National Cybersecurity Center of Excellence (NCCoE) Accelerate Adoption of Digital Identities on Mobile Devices Consortium. This initiative will help define and facilitate a reference architecture for digital credentials that protect privacy, are implemented securely, enable equity, are widely adoptable, and are easy to use.

Understanding the Initiative

The National Institute of Standards and Technology (NIST) National Cybersecurity Center of Excellence (NCCoE) is a collaborative hub where industry, organizations, government agencies, and academic institutions work together to address businesses’ most pressing cybersecurity challenges.

The NCCoE is playing a pivotal role in expediting the adoption of mobile driver's license (mDL) standards and best practices. In partnership with technology vendors (including SpruceID), government agencies, regulatory bodies, standards organizations, and entities aiming to implement mDLs, the NCCoE is kicking off an initiative to build a reference architecture that showcases practical, real-world business use cases. This initiative will integrate mDLs with commercially available technologies and embed them into existing business processes:

“Whether boarding a plane, creating a bank account, or making an online purchase, mobile driver’s licenses (mDLs) and other digital credentials have the potential to improve the way we conduct transactions, both in person and online. To help realize this potential, the NCCoE is collaborating with more than a dozen partners from across the mDL ecosystem to build out reference implementations and to accelerate the adoption of mDL standards and best practices.” 

- Bill Fisher, co-lead of the NIST mDL project, NIST National Cybersecurity Center of Excellence

This reference implementation aims to promote standards and best practices for mDL deployments and address mDL adoption challenges. Over the next two years the project will produce guidance addressing:

Know Your Customer/Customer Identification Program Onboarding and Access, which will demonstrate the use of an mDL and/or Verifiable Credentials (VC) for establishing and accessing an online financial account.
U.S. Federal Government Credential Service Provider (CSP) and Federation, which will demonstrate the use of an mDL and/or VC for establishing a CSP account to access federated agency systems.
Healthcare and Electronic Prescribing, which will demonstrate the use of an mDL and/or VC for provider access and prescription uses.

Benefits of the Mobile Driver’s License

Physical driver’s licenses were not designed for our online world. The current best practice for online identity verification asks users to take a picture of their driver’s license with a smartphone and to answer knowledge-based questions. The efficacy of these methods is being eroded by new technology, such as AI-generated images of driver’s licenses accurate enough to bypass document scanning tools and the ability of bad actors to get ahold of the information needed to answer knowledge-based questions.

mDLs function much like a traditional driver's license, carrying information such as name, date of birth, and address but in a digital format accessible through a dedicated mobile application, often referred to as a digital wallet. Compared to physical driver’s licenses, mDLs have several capabilities that make them easier to use with online and digital transactions:

mDLs are underpinned by public key cryptography, making the credential cryptographically verifiable.
mDLs can be integrated natively with device biometrics for user verification.
mDLs can communicate natively between two mobile applications, but also in cross-device flows between mobile applications and the web browser on a laptop or tablet.
mDLs offer the potential for selective disclosure, allowing users to pick and choose which information to share with third parties.
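
The selective disclosure capability above can be sketched with salted digests, loosely modeled on the approach in ISO/IEC 18013-5, where the issuer signs a list of digests rather than the attribute values themselves. Function and attribute names here are illustrative, and the signing of the digest list is omitted for brevity.

```python
import hashlib, os

def issue_digests(attributes):
    """Issuer salts each attribute and hashes it; in a real mDL only the
    digest list (not the values) is covered by the issuer's signature."""
    salted = {k: (os.urandom(16), v) for k, v in attributes.items()}
    digests = {k: hashlib.sha256(salt + str(v).encode()).hexdigest()
               for k, (salt, v) in salted.items()}
    return salted, digests  # holder keeps salted values; verifier trusts digests

def disclose(salted, keys):
    """Holder reveals only the chosen attributes, each with its salt."""
    return {k: salted[k] for k in keys}

def verify_disclosure(disclosed, digests):
    """Verifier recomputes each digest and checks it against the trusted set."""
    return all(
        hashlib.sha256(salt + str(v).encode()).hexdigest() == digests[k]
        for k, (salt, v) in disclosed.items()
    )

salted, digests = issue_digests({"name": "Alice", "dob": "1990-01-01", "address": "1 Main St"})
shared = disclose(salted, ["dob"])  # selective disclosure: date of birth only
assert verify_disclosure(shared, digests)
```

The salt prevents a verifier from brute-forcing the undisclosed attributes from their digests.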

Transactions at financial institutions, healthcare providers, government services, and many other organizations could benefit from enhanced customer experiences, more accurate identity verification, and reduced fraud if they supported mDLs.

How SpruceID will Collaborate with NCCoE

SpruceID is proud to have been selected to partner with the NCCoE to expedite the adoption of mobile driver’s license standards and best practices. Several of our contributions to this project will include:

Coordinate and collaborate with other parties to demonstrate success for the Financial Services Sector CIP/KYC use case, serving the primary role of a Wallet Provider.
The use of our open-source libraries, including the SpruceKit Wallet, an application holding mDocs and Verifiable Credentials that can interact over the internet and app-to-app using ISO/IEC 18013-7 and OpenID4VP.
Bring our expertise and learnings from interoperability test events that we previously hosted for ISO/IEC 18013-7 in August 2023, and from the development and deployment of the California DMV mobile driver’s license application.

We look forward to leveraging our unique knowledge and expertise to help drive this initiative forward.

Stay up to Speed

Interested in learning more and staying up to date with major milestones? Attend upcoming mDL events and follow along for updates on the NCCoE website mDL home page.

Attend Upcoming Events

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

Some Direction for AI/ML-ess Marketing


by John Tolbert

For the last few years, we have been inundated with messaging about Artificial Intelligence (AI). AI is no longer a term mostly used by academicians, IT professionals, or sci-fi fans. Those in the IT security field have seen AI, ML (Machine Learning), and Generative AI (GenAI) proliferating in marketing, while product developers look for ways to incorporate these technologies into products. Vendors touting some variation of artificial intelligence in their products have garnered more investment. There have been productivity gains. But has “AI/ML” as a marketing term peaked?

A recent study in the Journal of Hospitality Marketing & Management, titled “Adverse impacts of revealing the presence of “Artificial Intelligence (AI)” technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk” shows that consumers are put off by the use of “AI” in product marketing. Some of the reasons cited include a lack of trust for AI, a lack of transparency about AI usage, and concerns about privacy. Although this study focused on consumer goods and services, do the lessons learned apply to IT, and specifically cybersecurity?

I recently returned from Black Hat 2024 in Las Vegas. While there was plenty of AI, ML, and GenAI signage in booths on the show floor, how vendors are marketing these technologies in products seems to be shifting a bit. Security practitioners are and have been aware of the presence of and need for machine learning in products for many years. An example is the use of ML detection models in Endpoint Protection Detection and Response (EPDR) products to identify new variants of malware. It is infeasible to build an EPDR solution today that does NOT use ML, given the volume of malware variants discovered every day. AI/ML is not new in the market, and it is not new to those of us working in the field. Perhaps this realization among product marketing teams is another reason why the messaging is changing and needs to evolve further.

2023 was certainly the year of GenAI, with large language models (LLM) capturing not only the attention of the public but also becoming mainstream tools. Vendors large and small rushed to find ways to get GenAI into products. Such objectives are innovative, and can result in improvements in usability, but not always. Customers of IT security solutions may be skeptical about unqualified claims of how GenAI improves those products.

Continuing with the EPDR example, several vendors have natural language query interfaces powered by GenAI, guided investigation tools for analysts informed by AI, and executive level reports drafted by GenAI. These have the potential to save time and improve organizational security posture for customers. However, there are concerns about the quality of the output. Can it be trusted? AI outputs have explainability problems. Moreover, since the outputs from AI tools depend on the quality and relevance of the data in their models, how are security vendors getting a sufficient quantity of relevant data, and how do they assess the veracity of the outputs of their LLM functions? How can customers be assured that data governance and security policies are applied to the data from their organizations?

In discussing LLMs, how they work, and answering questions about whether LLMs lie or hallucinate in the Journal of Ethics and Information Technology, Hicks, Humphries, and Slater state that LLMs are “not designed to represent the world at all; instead, they are designed to convey convincing lines of text.” In the proceedings of the 2022 Conference on Human Information Interaction and Retrieval, Bender and Shah said about LLMs: “No reasoning is involved […]. Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language.”

At this point, IT (and especially IT security) vendors and their product marketing teams would be better served by providing more information about their use of ML and GenAI in their solutions. Assume you have a tech-savvy audience, because you do. What kinds of AI technology are you using? For which functions is it being used? Where are you getting data for model training? How are you doing quality control on the outputs before releasing them to customers? These are the kinds of questions that buyers of security solutions have.

Join us in December in Frankfurt at our cyberrevolution conference, where we will continue to dissect how AI is used in cybersecurity.

See some of our other articles and videos on the use of AI in security:

Cybersecurity Resilience with Generative AI
Generative AI in Cybersecurity – It's a Matter of Trust
ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?
Asking Good Questions About AI Integration in Your Organization
Asking Good Questions About AI Integration in Your Organization – Part II

Elliptic

Crypto regulatory affairs: Fed undertakes enforcement against Customers Bank for digital asset risk management gaps

The Federal Reserve Board has sent a warning to banks about the importance of addressing cryptoasset risk exposure through a recent and landmark enforcement action.



1Kosmos BlockID

Four Ways to Align Authentication with Business Needs


In a hybrid world that blends on-premises and cloud-based resources, securing access to sensitive data and systems is no longer achieved by defending a perimeter, but through authentication. While authentication technologies have evolved over the past decades from their humble password origins, preventing unauthorized access still hinges on choosing and implementing the right identity-based controls.

This involves navigating a landscape where knowledge-based, possession-based, biometric, and multi-factor authentication (MFA) methods offer a variety of advantages and limitations. Let’s consider each of the options available to organizations and how to select the right mix of controls to improve their security posture.

Knowledge-Based Authentication

Knowledge-based authentication (KBA), which encompasses passwords and PINs, is the most traditional form of authentication. Its widespread adoption and user familiarity make it a convenient starting point for many security protocols. However, its susceptibility to social engineering, phishing attacks, and the perennial issue of weak password creation by users necessitate a cautious approach. For environments where ease of use is paramount and risk levels are comparatively low, KBA can serve as a component of a more comprehensive security strategy, particularly when augmented with additional authentication factors.

Knowledge-based authentication (KBA) is best suited for environments with comparatively low risk levels, where ease of use is paramount and the accessed information is not highly sensitive or critical. It can serve as a supplementary authentication factor in conjunction with other methods, such as biometric or device-based authentication. Examples include accessing non-critical information, utilizing KBA alongside other authentication methods as a first factor, and implementing it in public Wi-Fi hotspots for streamlined user access without compromising security.

Possession-Based Authentication

Possession-based authentication methods require users to have a physical object, such as a security token or a mobile device, to gain access. This approach adds a tangible layer of security, making it harder for attackers to gain unauthorized access without physical possession of the required object. It’s particularly effective in scenarios where additional security is needed without significantly complicating the user experience, such as in financial transactions or access to high-security areas. However, the risk of loss or theft and the potential cost implications of deploying hardware devices must be considered.

Possession-based authentication methods offer heightened security measures for a range of scenarios, including financial transactions, remote work access, secure online transactions, and compliance-driven environments like legal and government agencies. In online banking, users require physical possession of a security token or mobile device to access their accounts securely. Similarly, in remote work settings, this method ensures that only authorized employees with designated devices can connect to corporate networks and sensitive data, mitigating risks associated with unauthorized access. Additionally, in e-commerce platforms and online payment systems, possession-based authentication enhances transaction security, reducing the risk of fraud and protecting sensitive financial information. Furthermore, compliance-driven industries can benefit from this approach to meet regulatory obligations and safeguard confidential information.

Biometrics

Biometric authentication offers a high-security level by utilizing unique user characteristics like fingerprints, facial recognition, or iris scans. This method is highly resistant to traditional hacking attempts and provides a seamless user experience. It is well-suited for environments where security cannot be compromised, such as in government or healthcare settings. Nevertheless, concerns around privacy, the potential for spoofing, and the need for compatible hardware investments can pose challenges. Organizations must weigh these factors against the critical need for secure and user-friendly authentication mechanisms.

Biometric authentication, which leverages unique user characteristics like fingerprints, facial recognition, or iris scans, is ideal for various high-security environments. It is best suited for secure access to sensitive data and fortifying high-risk online systems. Despite its advantages, organizations must consider privacy concerns, potential spoofing, and compatible hardware investments when deploying biometric authentication systems.

Multi-Factor Authentication (MFA)

MFA combines two or more authentication methods listed above to create a layered security approach, significantly enhancing protection against various threats. By integrating knowledge, possession, and biometric factors, MFA creates a dynamic defense mechanism that is much harder for attackers to bypass. This method is ideal for protecting sensitive data and critical systems, offering a balanced solution that addresses the vulnerabilities inherent in single-method authentication systems. While MFA introduces complexity and potential user resistance, its ability to significantly reduce security risks makes it a vital component of modern cybersecurity strategies.
Multi-factor authentication (MFA) is a versatile security method that finds applications across industries, serving to protect sensitive data and critical systems. More commonly, MFA is required to ensure secure access to corporate systems from outside the office, and in e-commerce platforms to safeguard customer accounts and high-risk customer and citizen transactions. Overall, MFA provides a defense mechanism against various threats, combining multiple authentication factors to significantly enhance security and mitigate risks inherent in single-method authentication systems.
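
As a concrete example of a possession factor commonly used as the second step in MFA, the sketch below implements time-based one-time passwords (TOTP, RFC 6238) using only the Python standard library; the shared secret would live on the user's device or hardware token.

```python
import hashlib, hmac, struct, time

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation from RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_totp(secret, code, window=1, step=30):
    """Accept codes from adjacent time steps to tolerate small clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * step), code)
               for i in range(-window, window + 1))

# RFC 6238 test vector: secret "12345678901234567890", T=59s -> "94287082"
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

The server verifies the code alongside a first factor (knowledge or biometric), so an attacker needs both the secret-holding device and that other factor.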

Passwordless

Passwordless authentication represents a significant leap forward in cybersecurity, eliminating the vulnerabilities associated with traditional knowledge-based methods. Most of the authentication methods described above still require a username and password as a first step in authenticating users. By leveraging biometrics, mobile devices, or security keys, passwordless systems offer a user-friendly and highly secure alternative that reduces the risk of phishing, password theft, and unauthorized access. This method is particularly advantageous in creating a seamless user experience without compromising security, and is ideal for environments aiming to minimize friction while maintaining high security standards.
Organizations across diverse sectors, particularly those looking for a better, more secure user experience, should carefully consider integrating passwordless authentication into their security frameworks. By leveraging biometrics, mobile devices, or security keys, passwordless systems offer a robust and user-friendly alternative to traditional password-based methods, effectively mitigating the risks associated with phishing, password theft, and unauthorized access. This approach not only enhances security posture but also fosters a seamless and efficient user experience, aligning with the modern landscape of digital operations where stringent security measures and user satisfaction are paramount.

Choosing the Right Strategy

The choice of authentication method should be driven by an organization’s specific needs, considering factors such as the sensitivity of the data, user experience requirements, and regulatory compliance mandates. Here are four key considerations for selecting the appropriate authentication method:

Risk Assessment: Evaluate the level of security risk associated with the data or systems being protected. Higher risk scenarios may warrant more stringent authentication methods, such as biometric or MFA.
User Experience: Consider the impact on the user. While security is paramount, overly cumbersome authentication processes can lead to poor compliance and user frustration.
Cost and Infrastructure: Assess the financial and infrastructure implications of deploying new authentication technologies. While advanced methods like biometric authentication offer enhanced security, they also come with higher implementation costs.
Compliance Requirements: Ensure that the chosen authentication method aligns with industry regulations and standards, which may dictate specific security measures.

Defending against increasingly sophisticated cyber threats requires understanding the unique advantages and limitations of available authentication methods, and selecting the controls that are best aligned with organizational needs and user expectations. Using the methods described above can help define an authentication strategy that ensures security measures remain robust, responsive, and user-friendly.

The post Four Ways to Align Authentication with Business Needs appeared first on 1Kosmos.


Ocean Protocol

Ocean Nodes Incentives: A Detailed Breakdown

This blog post will provide a detailed breakdown of the incentive mechanism for Ocean Nodes, including who is eligible and when rewards will be distributed.

With the recent launch of Ocean Nodes, a peer-to-peer (P2P) network that allows users to run all components of the Ocean Protocol stack — such as Ocean Provider, Aquarius, and Compute-to-Data — within a single component, we are excited to unveil the Ocean Nodes Boosters (ONBs), the Soulbound Tokens used in the incentive system.

This article dives into the details of how the incentives work, including the eligibility criteria and the timeline for reward distribution.

Understanding Ocean Nodes Boosters (ONBs)

The Ocean Nodes Boosters (ONBs) are non-transferrable ERC721 tokens, best known as Soulbound Tokens, that work as a key incentive mechanism to measure and maintain a high degree of node availability in the network. These tokens provide reward multipliers based on node uptime, incentivizing reliable participation in the network.

Here’s how the Ocean Nodes Boosters (ONBs) are structured across different launch phases:

Phase 1 ONB (ONB1): 1.5x reward multiplier
Phase 2 ONB (ONB2): 1.3x reward multiplier
Phase 3 ONB (ONB3): 1.2x reward multiplier

The reward multipliers increase depending on the combination of ONBs:

ONB1 + ONB2: 1.8x reward multiplier
ONB1 + ONB3: 1.7x reward multiplier
ONB1 + ONB2 + ONB3: Maximum 2x reward multiplier

The maximum reward multiplier a node can achieve is 2x if it holds all three ONBs, providing a powerful incentive to participate across all phases of the Ocean Nodes launch.

Uptime & Rewards Calculation

The Ocean Nodes incentive structure is designed to reward nodes that maintain high availability and uptime. The Ocean Protocol Foundation will allocate 5,000 $FET each week to nodes that demonstrate a high level of uptime. Rewards are calculated using the following formula:

R0 = Xt * U0 / Ut

Where:

R0 = Total Rewards earned

Xt = Total Rewards available

U0 = Node Uptime in seconds

Ut = Total Uptime per week, in seconds

Note: The Ocean Protocol Foundation nodes are excluded from these reward calculations, ensuring a fair distribution of incentives to independent participants.
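
The formula, with ONB multipliers applied to raw uptime, can be sketched in a few lines of code; the helper name and node labels below are illustrative.

```python
def weekly_rewards(uptimes, multipliers, total_rewards=5000.0):
    """R0 = Xt * U0 / Ut, where U0 is a node's multiplier-adjusted uptime and
    Ut is the sum of adjusted uptimes. Excluded (Foundation-run) nodes are
    simply left out of `uptimes`."""
    adjusted = {node: up * multipliers.get(node, 1.0) for node, up in uptimes.items()}
    total_adjusted = sum(adjusted.values())
    return {node: total_rewards * a / total_adjusted for node, a in adjusted.items()}

# Three eligible nodes: A holds ONB1 (1.5x), B holds no ONB, C holds all three (2x)
shares = weekly_rewards({"A": 10, "B": 20, "C": 10}, {"A": 1.5, "C": 2.0})
# shares["A"] ≈ 1363.64 $FET, shares["B"] ≈ shares["C"] ≈ 1818.18 $FET
```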

Now, let’s go through an example, to illustrate how this works. We will look at a scenario involving four nodes:

Node A: 10 sec uptime in Epoch X, holding ONB1 (1.5x reward multiplier) in their wallet
Node B: 20 sec uptime in Epoch X, no ONB
Node C: 10 sec uptime in Epoch X, holding ONB1+ONB2+ONB3
Node D: run by Ocean Protocol Foundation, therefore excluded from rewards

A Node’s adjusted uptime is their uptime*multiplier. So using the scenario above:

Node A’s adjusted uptime = 10 seconds * 1.5 = 15 seconds
Node B’s adjusted uptime = 20 seconds (no multiplier)
Node C’s adjusted uptime = 10 seconds * 2 = 20 seconds
Node D is excluded from rewards
Total Uptime = 15 seconds (Node A) + 20 seconds (Node B) + 20 seconds (Node C) = 55 seconds
Total rewards for the week = 5,000 $FET

Node A’s share = 15/55 ≈ 27.27% ≈ 1,364 $FET
Node B’s share = 20/55 ≈ 36.36% ≈ 1,818 $FET
Node C’s share = 20/55 ≈ 36.36% ≈ 1,818 $FET
Node D is excluded from rewards = 0 $FET

Eligibility for Incentives

To be eligible for incentives, nodes must meet specific criteria to ensure only active and publicly accessible nodes are rewarded. The following requirements must be met:

Public Accessibility: Nodes must have a public IP address
API and P2P Ports: Nodes must expose both HTTP API and P2P ports to facilitate seamless communication within the network

Users can verify Nodes eligibility by connecting to the Ocean Nodes dashboard and checking for a green status indicator next to their IP address.

Steps to Install the Node and Be Eligible for Rewards

To help you get started and ensure your node is eligible for rewards, follow these steps:

Find your public IP: You’ll need this for the configuration. You can easily find it by googling “my IP”
Run the Quickstart Guide: If you’ve already deployed a node, we recommend either redeploying with the guide or ensuring that your environment variables are correct and you’re running the latest version
Get your Node ID: After starting the node, you can retrieve the ID from the console
Expose Your Node to the Internet: From a different device, check if your node is accessible by running

telnet {your ip} {P2P_ipV4BindTcpPort}

To forward the node port, please follow the instructions provided by your router manufacturer (e.g. Asus, TP-Link, Huawei, Mercusys).
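
The telnet check above can also be scripted; here is a minimal sketch using Python's standard library (substitute your node's public IP and configured P2P TCP port).

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_reachable("203.0.113.5", 9000) for your node's P2P_ipV4BindTcpPort
```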

Verify eligibility on the Ocean Node Dashboard: Check https://nodes.oceanprotocol.com/ and search for your peerID to ensure your node is correctly configured.

Considerations

As Ocean Nodes are currently in an alpha stage, please remember to:

Regularly update your deployment to maximize uptime.
Account for potential issues such as node bugs*, internet disruptions, and more when measuring uptime.

*Report bugs in our dedicated Discord channel so we can address them as soon as possible. When reporting, please include useful information such as the environment variables (excluding private keys), hardware specifications, and relevant logs. Please remember NOT to share your private key with anybody.

Note: The current uptime may not be accurate as we’ve been testing and the monitoring system has been off multiple times. The uptime will reset on Thursday, August 29, at 00:00 UTC.

Reward Distribution & Timing

Rewards for node operators are calculated on a weekly basis, using Epochs to track uptime and performance.

Epoch Timing: Each epoch begins on Thursday at 00:00 UTC
Reward Distribution: While rewards are calculated weekly, the distribution may occur a few days or weeks after the epoch ends. This delay is intended to optimize for gas fees and ensure efficient transactions; however, there is a possibility that rewards could be distributed on the same day the epoch ends, depending on network conditions.

Conclusion

Ocean Nodes represent a significant step forward for decentralized AI development and data sharing. The incentive structure, highlighted by the introduction of the Ocean Nodes Boosters (ONBs) ensures that active and reliable nodes are rewarded proportionally, towards a healthy and sustainable network.

To start running your node today access the Ocean Nodes README, and follow the Quickstart guide available in the main repository for detailed instructions on deployment.

By becoming part of the Ocean Nodes now, you’re contributing to the evolution of decentralized AI and also positioning yourself to benefit from the growing opportunities within the Ocean Protocol ecosystem.

Stay tuned for more updates by following us on X and joining the discussion in our Discord Server.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Ocean Protocol is a founding member of the ASI Alliance.

Follow Ocean on Twitter or Telegram to keep up to date, and Predictoor’s Twitter for its news. Chat directly with the Ocean community on Discord. Track Ocean’s tech progress directly on GitHub.

Ocean Nodes Incentives: A Detailed Breakdown was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Verida

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part…

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 1)

This is the first of three posts over the next three weeks to release the “Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI” and was originally published by Chris Were, CEO and co-founder at Verida.

Introduction

Verida’s mission has always been clear: empower individuals to own and control their data. Now, we’re taking it further.

This Technical Litepaper presents a high-level outline of how the Verida Network is growing beyond decentralized, privacy preserving databases, to support decentralized, privacy-preserving compute optimized for handling private data. There are numerous privacy issues currently facing AI that web3 and decentralized physical infrastructure networks can help solve. From Verida’s perspective, this represents an expansion of our mission from allowing individuals to control their data to introducing new and powerful ways for users to benefit from their data.

Current AI Data Challenges

We are running out of high-quality data to train LLMs

Public internet data has already been scraped and indexed by AI models, with researchers estimating that by 2026 we will exhaust the high-quality text data available for training LLMs. The next frontier is private data, but it is hard and expensive to access.

Private enterprise and personal AI agents need to access private data

There is a lot of excitement around the next phase of AI beyond chat prompts. Digital twins or personal AI agents that know everything about us and support every aspect of our professional and personal lives. However, to make this a reality AI models need access to private, real time context-level user data to deliver more powerful insights and a truly personalized experience.

Existing AI platforms are not private

The mainstream infrastructure providers powering the current generation of AI products have full access to prompts and training data, putting sensitive information at risk.

AI trust and transparency is a challenge

Regulation is coming to AI, and it will become essential for AI models to prove that their training data was high quality and ethically sourced. This is critical to reducing bias and misuse and improving safety in AI.

Data creators aren’t being rewarded

User-owned data is a critical and valuable resource for AI, and those who create the data should benefit from its use. Reddit recently sold user data for $200M, while other organizations have reached similar agreements. Meta is training its AI models on user data from some countries but excludes European users, because GDPR prevents it from doing so without user consent.

Verida’s Privacy Preserving Infrastructure

Verida has already developed the leading private decentralized database storage infrastructure (see Verida Whitepaper) which provides a solid foundation to address the current AI data challenges.

Expanding the Verida network to support privacy-preserving compute enables private, encrypted data to be integrated with leading AI models, ensuring end-to-end privacy and safeguarding data from model owners. This will unlock a new era of hyper-personal and safe AI experiences.

AI services such as ChatGPT have full access to any information users supply and have already been known to leak sensitive data. Giving model owners access to private data increases the risk of data breaches, imperils privacy, and ultimately limits AI use cases.

There are three key problems Verida is solving to support secure private AI:

1. Data Access: Enabling users to extract and store their private data from third-party platforms for use with emerging AI prompts and agents.
2. Private Storage and Sharing: Providing secure infrastructure allowing user data to be discoverable, searchable, and accessible, with user consent, to third-party AI platforms operating within verifiable confidential compute environments.
3. Private Compute: Providing a verifiable, confidential compute infrastructure enabling agentic AI computation to occur securely on sensitive user data.

To support the above tasks, Verida is building a “Private Data Bridge” that allows users to reclaim their data and use it within a new cohort of personalized AI applications. Users can pull their private data from platforms such as Google, Slack, Notion, email providers, LinkedIn, Amazon, Strava, and more. This data is encrypted and stored in a user-controlled private data Vault on the Verida network.

It’s important to note that Verida is not building infrastructure for decentralized AI model training, or distributed AI inference. Rather, Verida’s focus is on providing a high performance, secure, trusted and verifiable infrastructure suitable for managing private data appropriate for AI use cases.

We have relationships with third parties that are building private AI agents, AI data marketplaces, and other privacy-centric AI use cases.

Comparing Current AI Solutions

AI solutions are deployed primarily in two ways: as cloud-based/hosted services or on local machines.

Cloud-based AI services, while convenient and scalable, expose sensitive user data to potential risks, as data processing occurs on external servers and may be accessible to third parties.

In contrast, local AI environments offer enhanced security, ensuring that user data remains isolated and inaccessible to other applications or external entities. However, local environments come with significant limitations, including the need for technical expertise that is not available to the majority of users. Moreover, these environments often face performance challenges; for instance, running large language models (LLMs) on standard consumer hardware is typically impractical due to the high computational demands.

Verida’s Confidential Storage and Compute infrastructure offers alternatives to these approaches.

Comparison of different AI infrastructure options

Apple recently announced Private Cloud Compute, which provides a hybrid local + secure cloud approach. AI processing occurs on the local device (i.e., a mobile phone) by default; when additional processing power is required, the request is offloaded to Apple’s servers operating within a trusted execution environment. This is an impressive offering focused on solving important security concerns relating to user data and AI. However, it is centralized, available only on Apple devices, and places significant trust in Apple, as the company controls both the hardware and the attestation keys.

Self-Sovereign AI Interaction Model

Let’s look at an ideal confidential AI architecture: an interaction model of how a basic “Self-Sovereign AI” chat interface, using a RAG-style approach, would operate in an end-to-end confidential manner.

Self-Sovereign AI Interaction Model

The End User Application in this example will be a “Chat Prompt” application. A user enters a prompt (e.g., “Summarize the conversation I had with my mates about the upcoming golf trip”).

A Private AI API endpoint (AI Prompt) receives the chat prompt and breaks down the request. It asks the LLM to convert the original prompt into a series of search queries. The LLM could be an open source or proprietary model; due to the confidential nature of the secure enclave, proprietary models could be deployed without risking theft of the model owner’s IP.

The search queries are sent to the User Data API which has access to data previously obtained via Verida’s Private Data Bridge. This data includes emails, chat message histories and much more.

The Private AI API collates the search query results and sends the relevant responses and original prompt to the LLM to produce a final result that is returned to the user.
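The four steps above can be sketched as a simple pipeline. Everything here is illustrative: the `llm`, `user_data_api`, and `ai_prompt` names are stand-ins rather than part of any published Verida SDK, and the LLM and User Data API are stubbed out so the flow is runnable end to end.

```python
# Minimal sketch of the Self-Sovereign AI interaction model, with stand-ins
# for the enclave-hosted LLM and the User Data API.

def llm(prompt: str) -> str:
    """Stand-in for an LLM running inside a secure enclave."""
    if prompt.startswith("Extract search queries:"):
        return "golf trip; mates conversation"  # queries derived from the prompt
    return "Summary: your golf trip is planned for next weekend."

def user_data_api(query: str) -> list[str]:
    """Stand-in for the User Data API over data pulled via the Private Data Bridge."""
    store = {
        "golf trip": ["Chat: 'Golf trip confirmed for next weekend!'"],
        "mates conversation": ["Email: 'Who is driving to the course?'"],
    }
    return store.get(query.strip(), [])

def ai_prompt(user_prompt: str) -> str:
    """Private AI API endpoint: prompt -> search queries -> private data -> answer."""
    # 1. Ask the LLM to turn the prompt into search queries.
    queries = llm(f"Extract search queries: {user_prompt}").split(";")
    # 2. Run each query against the user's private data.
    results = [hit for q in queries for hit in user_data_api(q)]
    # 3. Send the collated results plus the original prompt back to the LLM.
    return llm(f"Answer '{user_prompt}' using: {results}")

print(ai_prompt("Summarize the conversation I had with my mates about the upcoming golf trip"))
```

In a real deployment, each of these calls would cross a confidential-compute boundary rather than a local function call.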

Verida is currently developing a “showcase” AI agent that implements this architecture and can provide a starting point for other projects to build their own confidential private AI products.

Continue reading Part 2.

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part… was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

What is DIDComm? (With Pictures!)

The post What is DIDComm? (With Pictures!) appeared first on Indicio.

By Sam Curren

Trusted communication remains the internet’s critical missing component, even as our reliance on digital services like healthcare, mobile banking, and payments grows and seamless, secure interactions become vital. While some applications and protocols are designed to foster secure communication, they are narrow in scope and fail to broadly support the diverse types of communication we need. They focus on specific areas, such as simplifying complex login procedures or particular security schemes, but do not allow the kinds of communication necessary for a wide range of online activities.

The result is an incomplete tech landscape in which direct, secure communication is not fully achievable, users are left with a fragmented patchwork of partial solutions, and a successful zero trust security practice continues to challenge even the most well-resourced organizations. Without holistic, user-friendly solutions that address these shortcomings, true, trusted, general communication on the internet remains an unfulfilled promise.

This is why more industries than ever are turning to decentralized identity and verifiable credentials to fill in these missing pieces, and why we’ve built DIDComm into the heart of Indicio Proven. While many other standards and protocols are developing to support the simple exchange of information using verifiable credentials, the vast majority of customer use cases that Indicio supports require both sides to authenticate, communicate, and build using the existing infrastructure they’ve already invested in. You can see deployments in travel, financial services, government, and more.

The success comes from DIDComm

DIDComm, or DID Communication, is a protocol designed to enable secure and private communication between parties by using decentralized identifiers (DIDs). Unlike traditional methods for trusted connections, DIDComm provides a robust framework for mutual authentication and trusted communication, addressing the gaps in current technologies. DIDComm leverages Verifiable Credentials to add trust to long-term digital relationships. By integrating DIDComm into an existing tech stack or ecosystem, both end users and businesses benefit from enhanced security, privacy, and trust. 

For end users, DIDComm ensures that their communications are not only encrypted but also authenticated. This protects them from malicious actors impersonating them, and equally from actors impersonating the business or other entity they are communicating with. For businesses and governments, it facilitates secure and seamless interactions with customers, partners, and citizens while reducing the risk of impersonation, mitigating fraud, and enhancing trust.

The decentralized nature of DIDComm also means there is no reliance on a central authority, organization, or company to manage the process or facilitate identity (anyone can use software to create a DID with an endpoint for DIDComm and cryptographically prove they control their DID). This increases resilience and reduces security vulnerabilities with a zero trust enhanced architecture. 

Incorporating DIDComm into your digital identity strategy is a game-changing move as it means that all parties in an identity ecosystem or communication channel can confidently authenticate each other and exchange information securely. This removes a fundamental weakness in current identity verification and communication.

The value of DIDComm lies in its ability to enable:

Secure communication: Traditional forms of digital communication, such as email, are often not encrypted at all, likely passing in plain text, meaning anyone who can observe network traffic can read it. And while email can be helpful as it serves both as an identifier and a method to communicate, the lack of secure, easy-to-use encryption creates security vulnerabilities when it comes to relaying sensitive information, such as health and financial records. While there are ways to encrypt email, they are typically clunky and not user-friendly. DIDComm solves this security problem in a way that is user-friendly, offering seamless key management and encryption.

DIDComm also fulfills the need to communicate securely while authenticating the identities of the participants. It requires an identifier that is verifiable and adds the ability to communicate both securely and privately.

Direct connection: DIDComm changes the nature of how we interact online, allowing us to regain the ability to communicate directly with others on the internet without dependence on third party platforms. This direct connection restores the security and trust that were lost with the reliance on intermediaries, such as email clients or social media platforms.

Extensibility: Much like the internet itself, DIDComm is highly extensible. It can be enhanced with capabilities through the design of new protocols. This extensibility allows DIDComm to interact with various things, people, and systems, making it incredibly useful. And where APIs are convenient ways to build complex communication protocols into online interactions, they require constant connection between their source and the end user making them difficult to update and manage, especially if connectivity is lost. DIDComm is optimized for, and extremely compatible with, commonly used devices such as mobile phones and tablets.

Mutual authentication: Authentication from one side of a connection, which many traditional digital identity tools are capable of doing, is not enough. Both parties must be able to verify each other’s identities for there to be truly secure communication. But mutual authentication is rarely straightforward and often requires cumbersome setup and maintenance, which can deter widespread adoption. Applications and protocols also overlook the need for comprehensive privacy measures, failing to protect metadata or ensure data integrity across all layers of communication.

DIDComm enables mutual authentication, providing assurance to both parties in a communication channel that they are who they claim to be. While many existing systems authenticate one side of a connection, such as just identifying the customer or end user, it is equally important that the other side is also authenticated. Think about the phishing scams where fraudsters pretend to be your bank or other service in order for you to share your login information with a bogus website or login portal. DIDComm eliminates this. You’ll always know you are interacting with your bank.

Protocol interoperability: DIDComm can also be used alongside more focused protocols, such as OpenID4VC (which is limited to only the exchange of verifiable credentials and doesn’t provide a generalized method of communication). DIDComm goes beyond single purpose protocols and combines the power of verifiable credentials with extensible communication. The trust gained by the exchange of verifiable credentials can then be used to coordinate powerful interactions, secure messaging, and more.
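To make the envelope concrete: before signing and encryption, a DIDComm v2 message is a small JSON structure whose sender and recipients are identified by DIDs. The sketch below shows the general shape described by the DIDComm Messaging specification; the DIDs, message id, and timestamp are made-up example values.

```python
import json

# Shape of a plaintext DIDComm v2 message before it is encrypted for the
# recipient's DID. The DIDs, id, and timestamp below are illustrative.
message = {
    "id": "1234567890",                                    # unique message id
    "type": "https://didcomm.org/basicmessage/2.0/message",  # protocol/message type URI
    "from": "did:example:alice",                           # sender's DID
    "to": ["did:example:bob"],                             # recipient DIDs
    "created_time": 1726660000,                            # unix timestamp
    "body": {"content": "Hello Bob, this channel is mutually authenticated."},
}

# In practice this plaintext is signed and/or encrypted using keys resolved
# from the participants' DID documents, producing a JWM/JWE for transport.
print(json.dumps(message, indent=2))
```

Because both `from` and `to` are DIDs, each party can resolve the other's DID document and verify control of the corresponding keys, which is what makes the mutual authentication described above possible.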

Until DIDComm, the internet has been missing an easy, comprehensive solution for secure and trusted communication. Applications and protocols built on DIDComm support use cases ranging from communicating with government border authorities for the preclearance of international travelers to businesses and financial institutions offering customized products to customers.

To get involved with DIDComm, individuals and organizations can participate in the work of the Decentralized Identity Foundation (DIF), contributing to the development of standards and protocols, collaborate with industry leaders, stay informed about the latest advancements, and help shape the future of decentralized identity and secure communication.

Indicio has extensive experience with DIDComm, and we’d love to help you integrate Indicio Proven into your existing systems. Reach out to Indicio and learn how DIDComm can empower your organization.



Aergo

Aergo V4 Update: New Timeline and Key Considerations

As we continue to refine and enhance the Aergo network, we want to update our community on the revised timeline for the upcoming V4 hard fork. This adjustment allows us to ensure full compatibility with our current enterprise customers and their nodes and address a few minor issues identified during testing. Why the Change? Enterprise Node/Network Compatibility: Our enterprise custome

As we continue to refine and enhance the Aergo network, we want to update our community on the revised timeline for the upcoming V4 hard fork. This adjustment allows us to ensure full compatibility with our current enterprise customers and their nodes and address a few minor issues identified during testing.

Why the Change?

- Enterprise Node/Network Compatibility: Our enterprise customers play a crucial role in the Aergo ecosystem, and it’s vital that their nodes integrate seamlessly with the upcoming hard fork. To ensure this, we’re taking additional time to thoroughly test and align the upgrade with their specific requirements.
- Minor Issues Identified: During the final stages of testing, a few minor issues were identified that need to be addressed. While these issues do not impact the hard fork's core functionality, resolving them now will prevent potential disruptions and ensure a smooth transition for all participants.

So far, we’ve completed approximately 95% of our Aergo V4 test scripts, but a few tests are still pending to ensure everything functions as expected. This means we will not meet our previously communicated mainnet hard fork target date of the end of August.

New Timeline

- Current Phase: Ongoing testing and final optimizations, with 95% of the work completed
- Testnet Launch: Mid-September
- Mainnet Hard Fork: End of September

We will continue working with key participants, including node operators, exchanges, and other partners, to ensure all necessary preparations are completed ahead of the new timeline. This includes additional testing, further optimization, and ensuring the community is fully prepared for the transition.

While delays can be challenging, this additional time is essential to ensure the hard fork meets the high standards our clients and community expect. We appreciate your understanding and continued support as we work to deliver a more robust, more reliable Aergo network.

Stay tuned for more updates!

Aergo V4 Update: New Timeline and Key Considerations was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What Is Password Spraying and How Do You Prevent It?

Learn about password spraying attacks, how they work, and how to defend your organization against them with our comprehensive guide.

Password spraying is an account takeover (ATO) cyberattack where attackers use a single common password or a handful of common passwords to try to access many accounts. This method spreads out login attempts across numerous accounts, making it harder to detect and block.

 

By using password spraying, attackers can effectively take over user accounts, leading to unauthorized access and potential exploitation of sensitive information.

 

These attacks are increasingly common and can lead to data breaches, financial loss, and damage to your organization's reputation. Understanding password spraying and how to defend against it is key to maintaining security.
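A minimal detection heuristic follows directly from the definition above: a spraying source fails against many distinct accounts, while a legitimate user who forgot a password fails repeatedly against one. The sketch below uses made-up log events and is purely illustrative, not a description of any particular product's detection logic.

```python
from collections import defaultdict

# Hypothetical failed-login events: (source_ip, username). A spraying source
# fails against MANY DISTINCT accounts; a normal user fails against ONE.
events = [
    ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
    ("198.51.100.2", "erin"), ("198.51.100.2", "erin"),  # forgotten password
]

def flag_spraying(failed_logins, threshold=3):
    """Flag source IPs whose failures span at least `threshold` distinct accounts."""
    accounts_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        accounts_per_ip[ip].add(user)
    return {ip for ip, users in accounts_per_ip.items() if len(users) >= threshold}

print(flag_spraying(events))  # the spraying source stands out
```

Per-account lockout policies miss this pattern entirely, which is why correlating failures across accounts (by source, time window, or password fingerprint) matters.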

Monday, 19. August 2024

Microsoft Entra (Azure AD) Blog

Face Check is now generally available

Earlier this year we announced the public preview of Face Check with Microsoft Entra Verified ID – a privacy-respecting facial matching feature for high-assurance identity verifications and the first premium capability of Microsoft Entra Verified ID. Today I’m excited to announce that Face Check with Microsoft Entra Verified ID is generally available. It is offered both by itself and as part of th

Earlier this year we announced the public preview of Face Check with Microsoft Entra Verified ID – a privacy-respecting facial matching feature for high-assurance identity verifications and the first premium capability of Microsoft Entra Verified ID. Today I’m excited to announce that Face Check with Microsoft Entra Verified ID is generally available. It is offered both by itself and as part of the Microsoft Entra Suite, a complete identity solution that delivers Zero Trust access by combining network access, identity protection, governance, and identity verification capabilities.

  Unlocking high-assurance verifications at scale


There’s a growing risk of impersonation and account takeover. Bad actors use insecure credentials in 66% of attack paths. For example, impersonators may use a compromised password to fraudulently log in to a system. With advancements in generative AI, complex impersonation tactics such as deepfakes are growing as well. Many organizations regularly onboard new employees remotely and offer a remote help desk. Without strong identity verification, how can organizations know who is on the other side of these digital interactions? Impersonators can easily bypass common verification methods such as counting bicycles on a CAPTCHA or asking which street you grew up on. As fraud skyrockets for businesses and consumers, and impersonation tactics have become increasingly complex, identity verification has never been more important.


Microsoft Entra Verified ID is based on open standards, enabling organizations to verify the widest variety of credentials using a simple API. Verified ID integrates with some of the leading verification partners to verify identity attributes for individuals (for example, a driver’s license and a liveness match) across 192 countries. Today, hundreds of organizations rely on Verified ID to remotely onboard new users and reduce fraud when providing self-service recovery. For example, using Verified ID, Skype has reduced fraudulent cases of registering Skype Phone Numbers in Japan by 90%.

 

Face Check with Microsoft Entra Verified ID


Powered by Azure AI services, Face Check adds a critical layer of trust by matching a user’s real-time selfie and the photo on their Verified ID, which is usually from a trusted source such as a passport or driver’s license. By sharing only match results and not any sensitive identity data, Face Check strengthens an organization’s identity verification while protecting user privacy. It can detect and reject various spoofing techniques, including deepfakes, to fully protect your users’ identities.


BEMO, a security solution provider for SMBs, integrated Face Check into its help desk to increase verification accuracy, reduce verification time, and lower costs. The company used Face Check with Microsoft Entra Verified ID to protect its most sensitive accounts which belong to C-level executives and IT administrators.


Face Check not only helps BEMO improve customer security and strengthen user data privacy, but it also created a 90% efficiency improvement in addressing customer issues. BEMO’s help desk now completes a manual identity verification in 30 minutes, down from 5.5 hours before implementing Face Check.


“Security is always great when you apply it in layers, and this verification is an additional layer that we’ll be able to provide to our customers. It’s one more way we can help them feel secure.” – Jose Castelan, Support and Managed Services Team Lead, BEMO

 

Check out the video below to learn more about how your organization can use Face Check with Microsoft Entra Verified ID:

  Jumpstart with partners


Our partners specialize in implementing Face Check with Microsoft Entra Verified ID in specific use cases or verifying certain identity attributes such as employment status, education, or government-issued IDs (with partners like LexisNexis® Risk Solutions, Au10tix, and IDEMIA). These partners extend Verified ID’s capabilities to provide a variety of verification solutions that will work for your business’s specific needs.


Explore our partner gallery to learn more about our partners and how they can help you get started with Verified ID.

 

Start using Face Check with Microsoft Entra Verified ID


Face Check is a premium feature of Verified ID. After you set up your Verified ID tenant, there are two purchase options to enable Face Check and start verifying:


1. Begin the Entra Suite free trial, which includes 8 Face Check verifications per user per month.
2. Enable Face Check within Verified ID and pay $0.25 per verification.

 

Visit the Microsoft Entra pricing page for more details.

 

What’s Next?


Learn more about how Microsoft Entra Verified ID works and how organizations are using it today, and join us for the Microsoft Entra Suite Tech Accelerator on August 14 to learn about the latest identity management and end-to-end security innovations.

 

Ankur Patel, Head of Product for Microsoft Entra Verified ID

Read more on this topic 

Watch the Zero Trust spotlight
Learn about the Microsoft Entra Suite
Learn more about Face Check with Microsoft Entra Verified ID in the FAQ

 

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn

liminal (was OWI)

2024 Liminal Landscape: Your Blueprint for Market Leadership

The post 2024 Liminal Landscape: Your Blueprint for Market Leadership appeared first on Liminal.co.

Ocean Protocol

Predictoor Benchmarking: The Effects of Balancing on Calibrated Linear Classifiers

Comparing Calibrated Lasso (L1) vs Ridge Regression (L2) vs ElasticNet (L1-L2) Classifiers With and Without Balancing

Summary

This post describes benchmarks of Ocean Predictoor simulations across the Predictoor models: ClassifLinearLasso, ClassifLinearLasso_Balanced, ClassifLinearRidge, ClassifLinearRidge_Balanced, ClassifLinearElasticNet, and ClassifLinearElasticNet_Balanced. The benchmarks compare the effects of model class balancing on Predictoor profit (accuracy) and trader profit. Each implementation is compared with three different calibrations.

It then walks through each of the benchmark plots for Predictoor/trader profit and compares the models and their calibrations.

1. Introduction

1.1 What is Ocean Predictoor?

For information about Ocean Predictoor, please refer to the Predictoor Series blogpost that catalogs all the blog posts, articles, and talks related to Predictoor. Learn about ML classification, L1 & L2 regularization, calibration, and Predictoor’s simulation tools (“pdr sim” and “pdr multisim”) in the Regularized Linear Classifiers With Calibration blogpost.

1.2 What is ML Balancing?

ML balancing refers to techniques used to adjust the distribution of classes in a dataset to address bias in a model’s performance on classification problems. Balancing can be achieved through various methods such as undersampling the majority class, oversampling the minority class, or synthetically generating data for underrepresented classes using algorithms like SMOTE (Synthetic Minority Over-sampling Technique). These adjustments help the model predict each class equally well despite their differing sample sizes.

1.3 Understanding Balancing Implementation

The models in this benchmarking blogpost are implemented with Python scikit-learn’s LogisticRegression() function with the class_weight = “balanced” parameter. The parameter’s balancing formula is detailed in the Appendix.
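For reference, scikit-learn's class_weight="balanced" option weights each class inversely to its frequency, as n_samples / (n_classes * count(class)), so under-represented classes count more in the loss. A quick sketch of the computation on a small made-up label vector:

```python
import numpy as np

# scikit-learn's class_weight="balanced" assigns each class the weight
#   n_samples / (n_classes * count(class)).
y = np.array([1, 1, 1, 1, 0, 0])            # imbalanced labels: four 1s, two 0s
n_samples, n_classes = len(y), len(np.unique(y))
weights = n_samples / (n_classes * np.bincount(y))
print(dict(zip(np.unique(y), weights)))     # minority class 0 gets the larger weight
```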

1.4 Benchmarks Outline

We run benchmarks on the approaches:

- ClassifLinearLasso — implemented with scikit-learn’s LogisticRegression() and L1 regularization.
- ClassifLinearLasso_Balanced — implemented with scikit-learn’s LogisticRegression(), L1 regularization, and class_weight=“balanced”.
- ClassifLinearRidge — implemented with scikit-learn’s LogisticRegression() and L2 regularization.
- ClassifLinearRidge_Balanced — implemented with scikit-learn’s LogisticRegression(), L2 regularization, and class_weight=“balanced”.
- ClassifLinearElasticNet — implemented with scikit-learn’s LogisticRegression() and L1 & L2 regularization.
- ClassifLinearElasticNet_Balanced — implemented with scikit-learn’s LogisticRegression(), L1 & L2 regularization, and class_weight=“balanced”.
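The six variants map onto LogisticRegression settings roughly as follows. This is a configuration sketch only: the solver pairings match the ones named later in this post (liblinear for L1, LBFGS for L2, SAGA for elastic net), while other hyperparameters, such as the elastic-net l1_ratio, are assumed defaults rather than taken from the benchmark code.

```python
from sklearn.linear_model import LogisticRegression

# Map the six benchmarked variants onto LogisticRegression settings.
def make_model(penalty: str, balanced: bool) -> LogisticRegression:
    solver = {"l1": "liblinear", "l2": "lbfgs", "elasticnet": "saga"}[penalty]
    return LogisticRegression(
        penalty=penalty,
        solver=solver,
        l1_ratio=0.5 if penalty == "elasticnet" else None,  # L1/L2 mix, assumed value
        class_weight="balanced" if balanced else None,
    )

models = {
    "ClassifLinearLasso": make_model("l1", False),
    "ClassifLinearLasso_Balanced": make_model("l1", True),
    "ClassifLinearRidge": make_model("l2", False),
    "ClassifLinearRidge_Balanced": make_model("l2", True),
    "ClassifLinearElasticNet": make_model("elasticnet", False),
    "ClassifLinearElasticNet_Balanced": make_model("elasticnet", True),
}
```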

The models are also benchmarked with the same three calibration approaches, None, Isotonic, and Sigmoid, as in the Linear SVM Classifier with Calibration blog post.
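The Isotonic and Sigmoid settings correspond to scikit-learn's CalibratedClassifierCV wrapper (sigmoid is Platt scaling), with None meaning the raw model's probabilities are used directly. A minimal sketch on synthetic data; the dataset and cv value here are illustrative, not the benchmark's settings:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Three calibration settings: None (raw model), isotonic, and sigmoid.
X, y = make_classification(n_samples=300, random_state=0)

raw = LogisticRegression().fit(X, y)                                        # calibration = None
iso = CalibratedClassifierCV(LogisticRegression(), method="isotonic", cv=3).fit(X, y)
sig = CalibratedClassifierCV(LogisticRegression(), method="sigmoid", cv=3).fit(X, y)

for name, model in [("none", raw), ("isotonic", iso), ("sigmoid", sig)]:
    # Each model yields class probabilities; calibration reshapes them
    # toward matching observed frequencies.
    print(name, model.predict_proba(X[:1]).round(3))
```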

1.5 Experimental Setup

The same testing parameters as in the previous blog post, Different Optimizations for Linear SVC, were used in this experimental setup’s my_ppss.yaml file.

2. ClassifLinearLasso With and Without Balancing

Ocean Predictoor’s ClassifLinearLasso and ClassifLinearLasso_Balanced models are implemented with Python scikit-learn’s LogisticRegression() with an L1 regularization & liblinear solver. More information about the liblinear solver is well documented in the previous Different Optimizations for Linear SVC blog post.

2.1.1 Predictoor Profitability

Balancing did not improve the ClassifLinearLasso model’s Predictoor profits. The maximum Predictoor profit achieved by ClassifLinearLasso was 6224.46 OCEAN using a Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. However, the ClassifLinearLasso_Balanced model only profited 5226.80 OCEAN. They both used the same tunings to achieve their max Predictoor profits. Adding ETH-USDT data to the training set did not improve returns.

2.1.2 Trader Profitability

Balancing did improve the max trader profit, and the ClassifLinearLasso_Balanced model beat all the other models benchmarked in this blog post. The model gained $351.73 USD using None calibration trained on 1000 BTC-USDT data and with an autoregressive_n = 2. The unbalanced ClassifLinearLasso model’s best trader profit was $324.93 USD using the same tunings as the balanced model’s to achieve max trader profit. As in the Predictoor profit benchmark, adding ETH-USDT data did not improve trader profit returns.

3. ClassifLinearRidge With and Without Balancing

Ocean Predictoor models ClassifLinearRidge and ClassifLinearRidge_Balanced are implemented with Python scikit-learn’s LogisticRegression() with an L2 regularization & LBFGS solver. More information about the LBFGS solver is in the Appendix.

3.1 ClassifLinearRidge & ClassifLinearRidge_Balanced Benchmarks

3.1.1 Predictoor Profitability

Balancing did not improve the ClassifLinearRidge model’s max Predictoor profit. The max Predictoor profit achieved by ClassifLinearRidge was 6051.65 OCEAN, gained by using Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. The ClassifLinearRidge_Balanced model, by comparison, gained only 4313.23 OCEAN, using None calibration with 1000 BTC-USDT training samples and autoregressive_n = 2. Neither model profited more from the addition of ETH-USDT data to the training dataset.

3.1.2 Trader Profitability

Balancing did not significantly improve trader profit either. The ClassifLinearRidge model’s max trader profit was $342.54 USD with Isotonic calibration, 1000 training samples of BTC-USDT data, and autoregressive_n = 2. Whereas the top trader profit by the ClassifLinearRidge_Balanced model was $304.62 USD with None calibration, trained on 1000 samples of BTC-USDT & ETH-USDT data, and autoregressive_n = 2.

4. ClassifLinearElasticNet With and Without Balancing

Ocean Predictoor models ClassifLinearElasticNet and ClassifLinearElasticNet_Balanced are implemented with Python scikit-learn’s LogisticRegression() with L1 & L2 regularization & SAGA solver. More information about the SAGA solver is in the Appendix.

4.1 ClassifLinearElasticNet & ClassifLinearElasticNet_Balanced Benchmarks

4.1.1 Predictoor Profitability

Balancing did not improve the Predictoor profit of the ClassifLinearElasticNet model. The max Predictoor profit was 5932.50 OCEAN gained by the unbalanced ClassifLinearElasticNet model with Sigmoid calibration, 1000 samples of BTC-USDT training data, and autoregressive_n = 2. The max Predictoor profit gained by the ClassifLinearElasticNet_Balanced model was 4369.96 OCEAN using None calibration, 1000 training samples of BTC-USDT, and autoregressive_n = 2. Generally, adding ETH-USDT data to the training dataset did not improve Predictoor profitability.

4.1.2 Trader Profitability

Balancing did not improve the maximum trader profit achieved by the ClassifLinearElasticNet model either. The unbalanced model gained $330.53 USD with Isotonic calibration & 1000 training samples of BTC-USDT data with autoregressive_n = 2. Meanwhile the ClassifLinearElasticNet_Balanced model dropped in profitability. The balanced model gained $295.91 USD using None calibration and the same 1000 samples BTC-USDT training set with autoregressive_n = 2.

5. Comparison Analysis

5.1 Highest Predictoor Profits

The highest Predictoor profit of all the benchmarks was 6224.46 OCEAN achieved with an unbalanced ClassifLinearLasso model using a Sigmoid calibration, 1000 training samples of BTC-USDT data & an autoregressive_n = 2. The addition of ETH-USDT data to the training set weighed down Predictoor profits; the max Predictoor profit using BTC-USDT & ETH-USDT training data was 5451.79 OCEAN and was generated by the same ClassifLinearLasso model & tunings. Balancing the models decreased Predictoor profit even further. The max Predictoor profit by a balanced model was 5226.80 OCEAN which was gained by the ClassifLinearLasso_Balanced model using the same tunings as for the unbalanced max profits.

5.2 Highest Trader Profits

The maximum trader profit of all the benchmarks was $351.73 USD and was achieved with the ClassifLinearLasso_Balanced model. The balanced model used None calibration, trained on 1000 BTC-USDT data samples, and had an autoregressive_n = 2. The most profitable unbalanced models all used Isotonic calibration instead. The introduction of ETH-USDT to the training set generally decreased the trader profits.

6. Conclusion

Balancing did not improve Predictoor profits, but it did improve trader profit. The maximum trader profit was $351.73 USD in 5000 iterations and was achieved with the ClassifLinearLasso_Balanced model, beating all the other model benchmarks. The balanced model used None calibration, trained on 1000 BTC-USDT data samples, and had an autoregressive_n = 2. The highest Predictoor profit of all the benchmarks was 6224.46 OCEAN and was gained by an unbalanced ClassifLinearLasso model using a Sigmoid calibration, 1000 training samples of BTC-USDT data & an autoregressive_n = 2.

6.1 Patterns in Model Tuning

The benchmarks consistently showed that using a training set of 1000 samples solely from BTC-USDT data, without incorporating ETH-USDT data, coupled with an autoregressive lookback period of 2, yielded the highest profits across various model configurations, regardless of whether they were balanced or unbalanced. This specific setup likely maximized profitability by focusing on the more predictable patterns of Bitcoin transactions and efficiently leveraging short-term historical data to inform trading decisions. However, this approach may cause overfitting when predicting other market conditions or cryptocurrencies since the model’s strong performance on this narrowly defined dataset and lookback period may not generalize well.
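To make the autoregressive_n = 2 lookback concrete, here is a simplified sketch (not the actual Predictoor feature pipeline; the target definition and sample prices are illustrative assumptions) of building a two-step lookback feature set for next-candle direction:

```python
import numpy as np

def autoregressive_features(prices, autoregressive_n=2):
    """Each row holds the previous autoregressive_n prices; the target
    is whether the next price moves up. A simplified sketch, not the
    exact Predictoor feature pipeline."""
    X, y = [], []
    for t in range(autoregressive_n, len(prices) - 1):
        X.append(prices[t - autoregressive_n:t])
        y.append(int(prices[t + 1] > prices[t]))
    return np.array(X), np.array(y)

# Hypothetical BTC-USDT closes
prices = [100.0, 101.5, 101.0, 102.2, 103.0, 102.5]
X, y = autoregressive_features(prices)
print(X.shape, y.shape)  # (3, 2) (3,)
```

A larger autoregressive_n widens the lookback window at the cost of fewer usable training rows, which is one plausible reason the short window of 2 performed well here.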

6.2 Maximizing Predictoor Profitability

In all unbalanced model benchmarks, the maximum Predictoor profits were gained using a Sigmoid calibration. In all the balanced model benchmarks, the maximum Predictoor profits were generated using None calibration.
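The Sigmoid and Isotonic calibrations referenced throughout correspond to scikit-learn’s CalibratedClassifierCV methods, while None calibration means using the model’s raw predict_proba output. A minimal sketch on synthetic data (the dataset and cv=5 are illustrative assumptions):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
base = LogisticRegression(max_iter=1000)

# "Sigmoid" (Platt scaling) fits a logistic curve to the raw scores;
# "Isotonic" fits a non-parametric monotone mapping.
sigmoid = CalibratedClassifierCV(base, method="sigmoid", cv=5).fit(X, y)
isotonic = CalibratedClassifierCV(base, method="isotonic", cv=5).fit(X, y)

# Calibrated class probabilities for the first sample (rows sum to 1).
print(sigmoid.predict_proba(X[:1]))
print(isotonic.predict_proba(X[:1]))
```

Because Predictoor stakes and confidence-based trades both key off predicted probabilities rather than hard labels, the choice of calibration plausibly matters more here than in plain accuracy benchmarks.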

6.3 Maximizing Trader Profitability

An interesting pattern emerged in the trader profits: either balanced models using None calibration or unbalanced models using Isotonic calibration yielded the top trader profits. These configurations appeared to minimize losses and maximize profits in a confidence-based trading system. However, combining balancing with Isotonic calibration did not maximize trader profits.

6.4 Balancing with None Calibration

The combination of None calibration with balancing improved both Predictoor & trader profits. Without balancing, None calibration caused all the models to perform poorly. Balancing therefore appeared to reverse the effect of None calibration relative to the unbalanced models.

7. Appendix: Tables

7.1 ClassifLinearLasso Data Table

A highlight from the ClassifLinearLasso data table is that this data includes the maximum Predictoor profit of all the models, 6224.46 OCEAN. This max was generated with the ClassifLinearLasso model using a Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. The table also shows how Isotonic calibration helped the model achieve a strong trader profit and that the inclusion of ETH-USDT data did not improve profitability.

7.2 ClassifLinearLasso_Balanced Data Table

A noteworthy data point from the ClassifLinearLasso_Balanced data table is that it includes the max trader profit of all the benchmarks. The ClassifLinearLasso_Balanced model gained $351.73 USD using None calibration, trained on 1000 BTC-USDT data samples, and with an autoregressive_n = 2. The data table also shows that balancing decreased Predictoor profits & the inclusion of ETH-USDT data generally decreased profitability overall.

7.3 ClassifLinearRidge Data Table

The data table for the ClassifLinearRidge model shows that it achieved a max Predictoor profit of 6051.65 OCEAN by using Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. This calibration was also used with the ClassifLinearLasso model to generate its max Predictoor profit. It also matches the ClassifLinearLasso data in that an Isotonic calibration improved trader profit returns. The inclusion of ETH-USDT data decreased profitability.

7.4 ClassifLinearRidge_Balanced Data Table

Balancing did not improve either the ClassifLinearRidge model’s max Predictoor profit or trader profit. The ClassifLinearRidge_Balanced model only gained a max Predictoor profit of 4313.23 OCEAN and used None calibration with 1000 BTC-USDT training samples and autoregressive_n = 2. The top trader profit by the ClassifLinearRidge_Balanced model was $304.62 USD with None calibration, trained on 1000 samples of BTC-USDT & ETH-USDT data, and autoregressive_n = 2. The addition of ETH-USDT data to the training dataset decreased profitability.

7.5 ClassifLinearElasticNet Data Table

Because the ClassifLinearElasticNet model uses both L1 & L2 regularization, it is expected to show behavior similar to both ClassifLinearLasso and ClassifLinearRidge, and this is exactly what the data shows. The model’s max Predictoor profit & trader profit were gained under the same circumstances: Sigmoid calibration for max Predictoor profit & Isotonic for max trader profit, each with 1000 samples of BTC-USDT training data and autoregressive_n = 2. The data also agrees on the effect of ETH-USDT data weighing profits down. Max Predictoor profit was 5932.50 OCEAN and max trader profit was $330.53 USD, showing that combining L1 & L2 regularization yielded somewhat lower max profits than ClassifLinearLasso and ClassifLinearRidge achieved individually.

7.6 ClassifLinearElasticNet_Balanced Data Table

The ClassifLinearElasticNet_Balanced data table shows that balancing did not improve either the Predictoor profit or trader profit of the ClassifLinearElasticNet model. The max Predictoor profit was 4369.96 OCEAN, and the max trader profit was $295.91 USD. Like the ClassifLinearLasso_Balanced & ClassifLinearRidge_Balanced models, the ClassifLinearElasticNet_Balanced model used None calibration and 1000 samples of BTC-USDT data with autoregressive_n = 2 to achieve these maximums. Generally, adding ETH-USDT data to the training dataset did not improve profitability.

8. Appendix: Details on Model Class Balancing

8.1 About Scikit-learn’s Balancing Algorithm

The models in this blog post are implemented with Scikit-learn’s LogisticRegression() function with the class_weight="balanced" parameter. The balancing algorithm automatically adjusts the weights of the classes based on their frequencies in the input data. The formula it uses is:

    weight(class) = n_samples / (n_classes * n_samples_in_class)

This adjustment helps to treat each class equally despite their differing sample sizes. In imbalanced datasets without such adjustments, the classifier might predominantly predict the majority class, ignoring the minority classes.
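Scikit-learn exposes this same computation through its compute_class_weight helper, which makes the adjustment easy to verify. With 8 samples of one class and 2 of the other, the weights come out to n_samples / (n_classes * count) for each class:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Imbalanced labels: 8 samples of class 0, only 2 of class 1.
y = np.array([0] * 8 + [1] * 2)

# class_weight="balanced" uses w_c = n_samples / (n_classes * count_c):
#   class 0 -> 10 / (2 * 8) = 0.625
#   class 1 -> 10 / (2 * 2) = 2.5
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=y)
print(weights)
```

The minority class receives the larger weight, so misclassifying it costs the model proportionally more during training.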

8.2 About the LBFGS Solver

The LBFGS solver (Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm) is an optimization algorithm in the family of quasi-Newton methods. It approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory. The LBFGS solver was chosen for the ClassifLinearRidge model due to its efficiency in handling a large number of features.

8.3 About the SAGA Solver

The SAGA (Stochastic Average Gradient Descent Algorithm) solver is a variant of stochastic gradient descent that supports both L1 and L2 regularization. It combines the sparse gradient updates of the Proximal Gradient method with a variance reduction technique that accelerates the convergence of stochastic methods. SAGA is particularly effective in ML applications with high-dimensional feature spaces, and since it also supports L1 & L2 regularization, it was chosen as a solver for the ClassifLinearElasticNet model.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

Predictoor Benchmarking: The Effects of Balancing on Calibrated Linear Classifiers was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Evernym

Multi-Factor Authentication: How It Defends Against Threats and Why It Matters

Multi-Factor Authentication: How It Defends Against Threats and Why It Matters In an era where cyber threats are becoming increasingly sophisticated, securing access to systems and data is paramount. Multi-factor authentication (MFA) has emerged as a critical tool in enhancing security by adding layers of protection beyond traditional passwords. By requiring ...

The post Multi-Factor Authentication: How It Defends Against Threats and Why It Matters appeared first on Evernym.


Indicio

How verifiable credentials disrupt online fraud, phishing, and identity theft

The post How verifiable credentials disrupt online fraud, phishing, and identity theft appeared first on Indicio.

By Ken Ebert

Everyone’s online life begins with a user account, a login, and a password, which, combined, turn into an identity. I am my email address — or social media account login. For the past twenty-five years, life online has evolved by accumulating these digital identifiers. The more we have, the more we can do online.

We don’t really own these digital identifiers: they’re lent to us on the assurance that we are who we claim to be, via the personal information we provide. This information is stored in a database along with lots of other people’s personal data so that they, too, can have a digital identifier.

This is how we identify each other on a network that was designed to manage computer identity rather than personal or organizational identity. It’s been amazingly successful at allowing billions of people to exist and interact online. Unfortunately, what it hasn’t been amazingly successful at is preventing all those people from having their identities stolen or faked.

One anecdote may be familiar: you get an email “from your bank.” Due to suspicious activity, your account has been locked and you need to log on to unlock it. You log in (but not you, because you’d never be fooled by this, right?) and… it’s not your bank. Whoever you’ve just given your login details to can now access your real bank account. Ninety percent of successful data breaches are the result of successful phishing.

Or maybe it doesn’t have to be this sophisticated: your password is 1,2,3,4,5 — and Malicious Actors Inc guess their way into your account. Or you reuse the same password across accounts and a data breach for one of these accounts means multiple accounts are now accessible to hackers.

And not just you. Once attackers are inside a database, every account is compromised. The whole defense collapses if a single access point is breached.

Identity fraud can also be sophisticated, such as someone using generative AI tools to create a deepfake of your biometrics or those of your boss — and you give them 25 million dollars, thinking you’re following legitimate directions.

Yes, there are security solutions like multifactor authentication, but they can only do so much, given that the underlying architecture of ‘account logins-passwords-databases’ is so hard to defend. And many people dislike the friction they add to online interaction, which is already burdened by an endless cycle of forgetting and resetting passwords. I recently joined a Teams meeting where I had to receive an email with a PIN code, experience two biometric checks, and supply a two-digit code from my authenticator app. 

A digital transformation in how we share and verify data
Here’s what verifiable credentials and decentralized identity do: They remove the underlying problem of user accounts, logins, passwords.

Instead of authenticating a user account through a login and password, a user is authenticated with a verifiable credential and cryptography. 

What is a verifiable credential? Think of it like an envelope for sealing and sharing digital information. The source of the envelope (the organization issuing the credential) can be cryptographically verified. The information in the envelope is digitally signed, which, in essence, means that any attempt to alter or tamper with the information breaks the seal and can be detected.
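The envelope’s tamper-evidence can be sketched with standard-library primitives. To be clear about the assumptions: real verifiable credentials use public-key signatures (such as Ed25519 or BBS) so anyone can verify without the issuer’s secret, and the key, payload, and helper below are hypothetical; an HMAC is used here only to illustrate that any change to the payload breaks the seal:

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; real credentials use asymmetric key pairs.
issuer_key = b"issuer-secret"

# The "envelope": a payload plus a seal computed over its exact bytes.
credential = {"type": "ProofOfAge", "over_18": True}
payload = json.dumps(credential, sort_keys=True).encode()
seal = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()

# A tampered copy of the payload with one value flipped.
tampered = json.dumps({"type": "ProofOfAge", "over_18": False},
                      sort_keys=True).encode()

def verify(data: bytes) -> bool:
    # Recompute the seal; any altered byte produces a different digest.
    expected = hmac.new(issuer_key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(seal, expected)

print(verify(payload))   # True: seal intact
print(verify(tampered))  # False: tampering detected
```

The asymmetric version works the same way conceptually, except verification needs only the issuer’s public key, which is what lets anyone check the envelope’s source.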

But this is only one of the elements in the new authentication ‘stack.’

You can accept and share a verifiable credential because the software in your digital wallet has created an address for it to be sent to. This address — a decentralized identifier or DID — is under your control and you can prove this control cryptographically when you interact with another DID. 

The combination of a DID and a verifiable credential enable you to prove that you are in control of a specific identity, and you can now attach any data to that identity by writing it to a credential.

The upshot is that people hold their data, authenticate themselves and each other cryptographically, and share data that can be trusted because we can know it hasn’t been altered (assuming that we trust the original source of the data).

This is the instantaneous magic behind seamless digital travel. A person scans their physical passport and — providing it has a chip — the software reads the information from the passport and converts it into a digital credential. The software also requires the person to do a liveness check with a selfie and then compares the selfie with the digital image from the passport chip. The passport data is authenticated as having come from a legitimate passport-issuing authority, and the person is issued a Digital Travel Credential (DTC) by an airline.

When a DTC is presented (touchlessly), the source of the DTC is instantly authenticated, along with the integrity of the data in the DTC. Additional biometric authentication and, of course, biometric access to the device, provide further confidence that the person presenting the DTC is the holder of a legitimate passport. 

The result is portable trust. Verifiable data can go from anywhere to everywhere — and so can you.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post How verifiable credentials disrupt online fraud, phishing, and identity theft appeared first on Indicio.


KuppingerCole

Oct 17, 2024: IAM meets ITDR: A Recipe for Robust Cybersecurity Posture

In today's digital landscape, identity is at the forefront of enterprise security. With a growing number of cyberattacks originating from compromised identities, organizations must adopt an identity-first security approach. This approach emphasizes proactive measures over reactive responses, crucial for minimizing risks and safeguarding sensitive information.  

Sunday, 18. August 2024

KuppingerCole

Eight Recommendations for CISOs in 2025

In this episode of the KuppingerCole Analyst Chat, host Matthias Reinwarth is joined by Annie Bailey, Research Strategy Director at KuppingerCole Analysts, to discuss the key trends that will shape the cybersecurity landscape through 2025. The conversation explores the increasing complexity of the attack surface, the growing importance of resilience and recovery in cybersecurity strategies, and the dual role of AI as both a threat and a defensive tool. In addition, the discussion covers the impact of emerging regulations, the need for advanced cybersecurity infrastructure, and how organizations can prepare for the anticipated challenges ahead.



Friday, 16. August 2024

Spruce Systems

SpruceID Joins Harvard and Microsoft Researchers for New “Personhood Credential” Proposal

Empowering humans is the best way to fight a coming wave of A.I.-powered fraud and disinformation.

Last week, Wayne Chang (CEO of SpruceID) and a broad coalition of researchers from Harvard, Microsoft, MIT, the Decentralized Identity Foundation (DIF), and other organizations released a major new proposal for fighting online disinformation and fraud. The proposed solution is a digital credential that would give internet users a powerful new tool for proving their authenticity online, while also ensuring strong privacy.

Our new paper proposes a “personhood credential,” or PHC, based on much the same cryptography-based digital credential technology that powers SpruceID’s mobile driver’s licenses in California and elsewhere. Much like SpruceID’s mDL deployments, the PHC system would reveal only the minimum necessary information about any user: in this case, simply that they are a human, not a bot or AI agent. The PHC would not disclose any identifying information, and is also designed to prevent cookie-like traceability. 

The credential would be an optional tool, primarily for specific users who want to establish a high level of credibility online while protecting their privacy, and for service providers who want to reduce fraud.

Why We Need to Prove Personhood Online

One major goal of the PHC is to distinguish authentic content on social media from deepfakes, coordinated manipulation, and other automated activity. Worries about inauthentic content online have been high for close to a decade now, but the recent advent of generative AI models, including their ability to mimic specific individuals on video, has created an even higher-risk environment for disinformation.

Proving authenticity on the internet is difficult for technical reasons, and no truly good solution has ever emerged. That’s one reason online financial fraud and identity fraud have steadily accelerated, now costing individuals and institutions tens of billions of dollars annually. The rise of AI generated content, meanwhile, has triggered worries of a “dead internet” full of robots talking endlessly to one another.

A digital credential to demonstrate personhood could combat both disinformation and fraud, mitigate against denial-of-service attacks using automated “botnets,” and empower individuals to prove their authenticity–even if they wish to remain anonymous.

Harnessing the Power of Encryption for Online Authentication

The proposed new PHC system is fundamentally user-controlled. Among other features, that means:

1. The PHC is optional for all users.

2. It cannot reveal real-world identities.

3. Users can choose their PHC issuer.

Optionality: While any natural person could request and receive a PHC, a PHC would not (and in fact could not) be required to use the internet. Specific high-security websites or online services, such as banking portals, may choose to require the PHC as an anti-fraud measure. More generally, we expect PHC use and adoption to be driven from the bottom up by users who wish to prove their authenticity.

Anonymity and Pseudonymity: Crucially, the system is designed to prove only that the holder is a person, without transmitting any specific data, such as name, credit card, birth date, or location. This is possible because issuers confirm an applicant’s authenticity offline, then issue an anonymized PHC credential.

The digital credentials themselves are validated and secured by cryptographic signatures. Related techniques are used to ensure that even these signed credentials are “unlinkable” – that is, that a user’s online activity cannot be tracked or collated. If the user desires, however, the PHC could also be used to preserve a single user identity over time.

Issuer Choice: Personhood credentials are issued and signed by an open network of PHC issuers, with measures to prevent the issuing of multiple credentials to a single person. The open issuer network ensures no issuer is able to abuse their power, for instance by limiting the uses a PHC is put to, or selecting who is eligible to receive one.

The Open PHC Issuer Network

It may seem counterintuitive that a proof of personhood credential can be trusted to a totally open network of self-selected issuers. While there are challenges and tradeoffs, we and our research coalition believe such a system strikes a balance: preserving democratic openness, while harnessing market dynamics to elevate the most trustworthy PHC issuers.

The alternative, restricting issuance only to already “trusted” issuers, would both restrict public access to the PHC credential, and create a “single point of failure” for the broader system. Potential failure conditions for a restricted-issuer system would include compromise by external hacking or internal subversion, such as the use of DMV staff privileges to gain unauthorized data access. Even worse, though, is the potential emergence of a “ministry of information” under which issuers control how PHCs are used to validate online content. 

To prevent those outcomes, the PHC credential must be available from a variety of sources. Different issuers will have different standards and procedures for proving user authenticity. These could range from government-issued identity documents and an in-person interview, to versions of decentralized identity relying on digital proofs of interactions like shopping and messaging, documented using digital proofs that can’t be faked by artificial intelligence.

By the same token, services seeking to validate humanity would be free to choose which issuers’ credentials to accept, unleashing competitive dynamics that would motivate provision of PHC services tailored for a variety of applications and users. For instance, a bank might require a PHC issued by a government entity, while a social media site could accept a less rigorous PHC. 

One challenge of the open issuer network is the risk that multiple issuers would issue PHCs to the same natural human, potentially allowing those additional credentials to be misused. This risk is still being tackled by researchers, but the possibility of multiple issuance still represents a significant improvement from the current, unlimited ability of bad actors to impersonate humans online.

Above all, the open nature of PHC issuance would prevent the accrual of more power to governments, providing a free-market alternative to governmental “ministries of truth” exercising anti-democratic information control.

Proving Humanity and Protecting the Information Commons

The internet is reaching a crisis point thanks to the continuing rise of spam, fraudulent content, data leakage, and hacking. The adoption of the PHC credential would benefit the entire digital information and security ecosystem, not merely those who hold or accept the credential.

The PHC would immediately distinguish authentic online content and interactions from automated manipulation, improving the online experience for many users without their own PHC. That’s both because the most authentic content would be easy to spot, and because the very existence of this new form of verification would disincentivize the creation of misleading content.

The PHC would provide this benefit without adding more personal data to “data hoards” likely to be targeted by hackers. Indeed, it’s these very large-scale hacks, such as the recent theft of 3 billion records, including government ID numbers, that are rapidly rendering “knowledge based” security measures obsolete, and better approaches necessary. In this compromised environment, adding the PHC as an access control tool for sensitive online applications would have a substantial impact on hacking and fraud.

For now, the personhood credential is a general proposal, with much work remaining both in designing the overall system and creating specific technical implementations. That means its benefits are still some time in the future, but the online fraud and disinformation it aims to address isn’t going anywhere – if anything, the situation seems poised to get worse. 

SpruceID is proud to have a hand in this major new proposal, and we’ll be contributing our expertise in identity, privacy and encryption to help bring it to fruition. If you see potential for the PHC to strengthen your organization’s digital efforts, please reach out – we’d be excited to learn about your needs, and help you prepare for a more authentic online future.

Read the Full Paper

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. Learn more on our website.


Dock

The EU Digital Identity Wallet: A Beginner's Guide

With the approval of eIDAS 2, 400 million EU citizens will soon have a EU Digital Identity Wallet containing legal credentials issued by their national governments. 

The shift from physical documents to digital IDs is one of the most significant changes in identity history. This evolution requires ID companies to adapt, innovate, and reimagine the possibilities of digital verification.

The EU Digital Identity Wallet provides a secure and versatile storage for digital credentials. It aims to simplify digital interactions across borders while ensuring interoperability and user control.

In this post, we cover the details of the EU Digital Identity Wallet, including its features, benefits, and applications, so that you gain a comprehensive understanding of it.

Let's dive in: https://www.dock.io/post/eu-digital-identity-wallet


Civic

Tokenized Identity: Permissioned vs Permissionless Assets on Solana with Austin Federa, Solana Foundation

In this episode of Tokenized Identity, Titus Capilnean, our VP of Go-To-Market, speaks with Austin Federa, Head of Strategy at Solana Foundation. They explore the world of permissioned and permissionless assets on Solana, when builders need to move the dial towards adding restrictions to comply with real-world regulations and how this can bring more web2 […]

The post Tokenized Identity: Permissioned vs Permissionless Assets on Solana with Austin Federa, Solana Foundation appeared first on Civic Technologies, Inc..


Dock

Dock implements BBS as the default signature algorithm in the Anonymous Credentials format

Technology standards are always changing, and it can be expensive for products to keep up. The rate of change is even faster for new technologies with emerging standards, such as the standards for verifiable credentials that are used to create reusable digital identities. Our customers don’t have to worry because our APIs hide the changes in the underlying credential standards. During the April 2024 Internet Identity Workshop, Kazue Sako from Waseda University provided an update on recent developments in BBS cryptography which serves as a good example of the complexity hidden by our products.

Dock’s Anonymous Credentials use an advanced cryptographic signature algorithm that was invented in 2004 and is known as BBS. BBS signatures support advanced privacy capabilities like unlinkable selective disclosure, while also being faster and smaller than other signature algorithms with similar capabilities. However, when BBS was originally proposed no one knew how to mathematically prove the security of the algorithm. Various modifications were made to BBS signatures to make it easier to prove their correctness, and in 2016 a version of the algorithm called BBS+ proved to be efficient enough to be widely used in verifiable credentials. We used BBS+ signatures when we first implemented our Anonymous Credentials format.

A paper published in 2023 includes a proof for the original BBS algorithm while also proposing some efficiency improvements compared to the BBS+ approach to verification of signatures with selective disclosure. Now that BBS signatures are known to be correct, we can use them instead of the BBS+ variant and benefit from the reduced computation requirements. The 2023 variant of BBS replaced BBS+ as the target of standardization at the IETF. We implemented support for BBS2023 last fall, and recently made it the default signature algorithm in the Anonymous Credentials format. This change is transparent to our customers who now use the best version of the algorithm when issuing new credentials while we also ensure that existing credentials remain verifiable.

As you follow our release notes and roadmap updates, you’ll see additional examples of how we track the evolution of identity technologies so that our customers don’t have to.


Gartner Rebuttal: Why Decentralized ID can improve KYC Compliance

In Gartner’s recently released 2024 Market Guide for Decentralized Identity, they suggest that organizations looking to improve their compliance processes with decentralized identity technologies should adopt a skeptical stance. They say:

A significant number of vendors claim to have the functionality within their DCI solution to comply with KYC and AML regulations. DCI vendors see this as crucial for making KYC and AML compliance processes more efficient. However, Gartner’s view is that, at this time, banks cannot make a good business case for transitioning away from their traditional compliance process, regardless of its inherent challenges.

At Dock Labs, we regularly speak with organizations who are unhappy with the costs and pains associated with KYC and AML compliance. These forward-thinking organizations find that reusable identity credentials provide them with essential tools to lower the costs of verifying individuals, and improve the experience of the users onboarding to their systems. They get these benefits without increasing fraud or compliance risk while simultaneously improving their compliance with privacy requirements and reducing the cost of protecting user data.

The difference in perspective is that these innovative organizations don’t see DCI as a replacement for existing compliance processes, but as new tools that can augment what is working now. With verifiable credentials as part of their toolbox, IAM practitioners can assemble a better solution than can be obtained solely with traditional compliance processes.

For example, think about opening a savings account online. You will likely be required to follow a traditional approach to compliance which requires a number of steps to verify your identity:

Take a picture of your national identity document and a selfie in order to validate your legal name; that legal name must then be checked against a watchlist of sanctioned people.

You will then be asked to enter your mailing information, which will be validated with an address service.

You then have to enter a phone number, which will be verified by sending you a text message that you must enter into the web site.

You will also be asked to enter an email address, which will be verified by sending you a link that you have to click on.

At this point you can finally set up your account. After recently completing this process with a family member, we were offered the opportunity to open a credit card with a partner bank. But we gave up when we found that we would need to go through the whole process again.

I wished that the savings bank had issued us a credential, accepted by the partner bank, showing that our legal name, tax number, mailing information, phone number, and email address had already been validated. Accepting the data through a credential would have saved us the hassle of data entry and re-validation, while also ensuring that the partner bank only uses data that has been verified by a trusted source according to the rules of their partnership agreement.
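As an illustration only, such a reusable credential could look roughly like a simplified W3C-style verifiable credential. All values, identifiers, and the "KYCCredential" type below are invented, and a real credential would also carry a cryptographic proof section:

```python
# Hypothetical sketch: a simplified W3C-style verifiable credential the
# savings bank could issue after onboarding. Every value here is made up.
kyc_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "KYCCredential"],
    "issuer": "did:example:savings-bank",
    "credentialSubject": {
        "id": "did:example:customer-1234",
        "legalName": "Jane Q. Customer",
        "watchlistChecked": True,
        "mailingAddressVerified": True,
        "phoneNumberVerified": True,
        "emailVerified": True,
    },
}

# The partner bank reads verified claims instead of re-collecting them.
already_verified = [k for k, v in kyc_credential["credentialSubject"].items()
                    if v is True]
print(already_verified)
```

The point is that each claim arrives pre-verified by a named issuer, so the partner bank can skip re-entry and re-validation for anything its partnership agreement lets it accept.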

It is true that using credentials does not remove the partner bank’s duty to record their basis for trusting the information. Particularly sensitive checks, such as the watchlist check, may need to be repeated. The referring bank may also charge a fee for the use of the identity credentials that they issued. Regardless, the credential-enabled process is much less painful for everyone involved.

Even Gartner acknowledges that decentralized identity technologies can help streamline regulatory compliance. We wholeheartedly agree with the advice they give near the end of their report, when they say:

Although regulations were initially expected to erect barriers to the adoption of DCI in heavily regulated industries like financial services, new DCI use cases allow organizations to comply with them. SRM leaders should explore how DCI can enable them to comply with regulations more easily, privately, and securely than conventional means.

We at Dock Labs are happy to help organizations stay ahead of their competitors by improving their KYC and AML compliance today.


PingTalk

Session Hijacking - How It Works and How to Prevent It

Learn about session hijacking, detection methods, and prevention techniques to safeguard your digital assets.

A session hijacking attack is one of the more common ways in which malicious actors can commit fraud. It allows black hat hackers to completely bypass secure authentication mechanisms, including multi-factor authentication (MFA) and others. This, in turn, grants access to a user’s secured accounts and systems, which can give attackers free rein to steal sensitive data. These types of attacks pose a serious threat to cybersecurity, both on an individual and organizational scale. The ramifications can include extensive financial losses and long-term damage to an organization’s reputation.

 

You may not be able to prevent your organization from being targeted by session hijacking attacks, but there are steps you can take to recognize these attacks and stop them in their tracks. Keep reading to explore the hallmarks of session hijacking, the various ways it can be attempted, and the prevention methods you can deploy to protect your users and your business.
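One widely applicable defense is minting high-entropy session identifiers and hardening the session cookie itself. A minimal sketch using only the Python standard library (the cookie name and attribute choices are illustrative, and this is one layer of defense, not a complete one):

```python
import secrets
from http import cookies

def new_session_cookie(name="session"):
    """Mint an unguessable session id and serialize it with hardening flags.

    Rotating the id at login (and on any privilege change) blunts session
    fixation, while Secure/HttpOnly/SameSite limit interception and
    cross-site reuse of the cookie.
    """
    token = secrets.token_urlsafe(32)   # roughly 256 bits of entropy
    jar = cookies.SimpleCookie()
    jar[name] = token
    jar[name]["secure"] = True          # only ever sent over HTTPS
    jar[name]["httponly"] = True        # invisible to page JavaScript
    jar[name]["samesite"] = "Strict"    # never sent on cross-site requests
    jar[name]["path"] = "/"
    return jar[name].OutputString()

print(new_session_cookie())
```

Pair this with server-side checks (binding sessions to device signals, expiring idle sessions, re-authenticating before sensitive actions) for defense in depth.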


BlueSky

Highlighting Community Starter Packs

Join a starter pack today!

In June, we released starter packs — personalized invites that allow you to bring friends directly into your slice of Bluesky.

Check out and join some of the starter packs that the Bluesky community has created!

I've made a start, only a few here so far so will keep searching - but if anyone knows any UK MPs I've missed let me know and I will add go.bsky.app/FACCR8t #ukpolitics

— Geoff (@geoffdeburca.bsky.social) Aug 13, 2024 at 2:31 AM

New here and like comics? Well @gregpak.bsky.social has you covered! Here are two starter sets of folks to follow! First a bunch of creators go.bsky.app/R4eqmGf

— Adam P. Knave (@adampknave.com) Aug 13, 2024 at 7:44 AM

I have made a ChemSky starter pack and am posting here to help boost visibility. This list is not exhaustive, but should hopefully help newcomers or rejoiners find some accounts and feeds to follow go.bsky.app/C9BtrLj

— Laura Howes (@laurahowes.bsky.social) Aug 15, 2024 at 11:44 AM

I made a starter pack for those fleeing #EduTwitter and joining #EduSky which should let you find a bunch of good people. go.bsky.app/HQHD4R1

— Caroline Spalding (@mrsspalding.bsky.social) Aug 15, 2024 at 6:34 AM

Calling all folk with an interest in UK public policy: I’ve created a starter pack of think tankers, policy analysts & commentators active on @bsky.app go.bsky.app/LtNiL1o

— Jessica Studdert (@jesstud.bsky.social) Aug 14, 2024 at 7:46 AM

starter pack of OC artists who are under 100 followers at the time of making this list! 🩷 go.bsky.app/6LGDx5g

— Saba 🏳️‍🌈 (@ace-of-dragons.bsky.social) Aug 14, 2024 at 11:36 AM

Starter pack for #nufc fans here. go.bsky.app/HmjNT4

— Kev Lawson (@editkev.football) Aug 11, 2024 at 2:09 PM

I love this starter pack business, so I've made one of some of the women I follow on here (including the estate of Ursula K Le Guin because I'm obsessed). I'm sure I'm missing a ton of great people. Anyone else I should include? go.bsky.app/2rubRr3

— Alona Ferber (@aloner.bsky.social) Aug 15, 2024 at 6:21 AM

Starter Pack for Seismology and Earthquake people. Add missing accounts in the comments and I'll add them to the pack! ⚒️🧪 #Geology go.bsky.app/ND4oS9k

— Henning ⚒️ (@geohenning.bsky.social) Aug 12, 2024 at 11:28 AM

Find more communities directly on Bluesky! See you there: bsky.app.

Thursday, 15. August 2024

Spruce Systems

Navigating the Jungle of Digital Credential Standards

SpruceID's multi-standard approach to digital identity credentials prioritizes user convenience, privacy, security, and sustainability, ensuring long-term functionality and adaptability.

The ongoing transition towards digital identity credentials will have many benefits for users and society, from increased privacy to preventing disinformation. The first form of digital credential that’s reaching the public is the mobile driver’s license, currently being piloted by several U.S. states. But there are many other potential digital credentials, from professional licenses and degrees to simple event passes, each with its own nuances. 

The builders architecting these systems, often from the ground up, face a challenge: choosing the right technical standard for presenting data. Standards enable the open, interoperable nature of digital identity systems, making sure potentially countless credential issuers, holders, and verifiers overseeing a huge variety of digital credentials are all on the same page. 

Digital credentials will eventually include not just driver’s licenses but more niche certifications from food handling to off-road vehicle training to professional affiliations. Agents handling related credentials will have to speak the same language – that is, use the same data standard – to interact in a smooth and trustworthy way. Email is another technology that runs on shared data standards, which is why a message sent from a Gmail account is still readable in Hotmail or any other email service. 

For better or worse, though, the world of digital credential standards is already wildly fragmented. For instance, there are already at least two digital formats to verify educational credentials: OpenBadges and the European Digital Credential. A recent report from the European Union Agency for Cybersecurity (ENISA) describes six different formal standards for digital identity credentials, among them the International Organization for Standardization’s (ISO) Mobile Driver’s License standard (mDL); standards under the EU’s eIDAS authority; and both OpenID and FIDO2 formats for online identity and security. And that’s just the tip of the iceberg. 

The choice of standard will also be shaped by the scope and nature of a project: Standards can be built for very specific and similar purposes, or they can be generalized and overlapping. Further, while some standards will grow into thriving ecosystems, others may fall by the wayside, just like the Betamax videotape standard. These and other factors can make choosing the right credential standard to build a system feel simultaneously very important and difficult.

But at SpruceID, we’ve taken a different approach to the stressful quandary of digital credential standards. Rather than choosing one standard to build our tools around, we integrate multiple standards that meet our goals for user convenience, privacy, sustainability, and security. This ensures our customers get what they need today and that our systems will still be functional tomorrow—even in the (unlikely!) event that we’re not around to maintain them.

Real Results Beat Abstract Superiority

The biggest pitfall when evaluating standards is trying to decide which one is the “best,” whether for your application or in general. The truth is that even if one technical roadmap offers clear advantages over another, parallel questions such as adoption rates and integrations can trump those concerns. The technically superior standard simply doesn’t always win—just ask Betamax, which lost the fight with VHS despite being better in every way.

So, instead of looking for some abstract “best,” here at SpruceID, we focus on whether each standard adequately provides four things: utility, privacy, security, and sustainability. Our systems integrate multiple standards that fulfill those needs and let users issue, manage, or verify credentials in all the supported standards formats.
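In practice, “integrating multiple standards” often starts with dispatching each credential to a verifier for its format. A minimal sketch (the format tags and verifier stubs below are hypothetical placeholders, not SpruceID’s actual implementation):

```python
def verify_mdoc(credential):
    # Stub: a real ISO mDL (mdoc) verifier parses CBOR and checks COSE
    # signatures against the issuing authority's keys.
    return isinstance(credential, bytes)

def verify_w3c_vc(credential):
    # Stub: a real W3C VC verifier checks the proof block against the
    # issuer's published keys.
    return isinstance(credential, dict) and "proof" in credential

# Hypothetical format tags; real systems negotiate formats explicitly.
VERIFIERS = {"mdoc": verify_mdoc, "w3c_vc": verify_w3c_vc}

def verify(format_tag, credential):
    """Route a credential to the verifier for its standard."""
    if format_tag not in VERIFIERS:
        raise ValueError("unsupported credential format: " + format_tag)
    return VERIFIERS[format_tag](credential)

print(verify("w3c_vc", {"proof": {"type": "DataIntegrityProof"}}))  # → True
```

The dispatch table is the key design choice: adding a new standard means registering one more verifier, without touching callers or existing formats.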

Privacy, in particular, is a major motive for the overall shift to digital identity, which opens up new possibilities for users to control information about themselves carefully. Our North Star is scholar Helen Nissenbaum, who emphasizes the importance of social context for our sense of privacy. Older, analog forms of identity could ‘leak’ information in the wrong context, and some early digital ID systems could reveal too much data about a user’s activities to credential issuers. 

But good digital identity standards give users control over precisely what data they’re sharing and when – including protecting them from uninvited monitoring, even from state authorities. Standards that protect user privacy and enable selective disclosure include ISO’s mDL standard and the World Wide Web Consortium’s (W3C) Verifiable Credentials format.

Similarly, standards must allow secure implementations. That doesn’t just mean that their cryptographic verification processes are sound—that’s important but relatively straightforward to assess. More subtle risks can lurk in how a standard shapes the storage and sharing of data: as an extreme example, fully centralized identity databases present serious risks to users' privacy. 

It’s worth noting here that there’s a nuanced relationship between all these standards and their even more varied implementations – that is, the actual code and systems that use the standards. It’s not hard to take a potentially secure and private identity standard and build a system around it that undermines those virtues, but our multi-standard strategy is focused on the core architecture and making sure our own tools implement it in the best possible way.

We Can Rebuild it. We Have the Standards.

The third minimum requirement for a standard to pass muster at SpruceID is that it offers inherent resilience. Above all, this means that it doesn’t depend on any one technology operator to keep functioning and that even if our own front-end system were to vanish, users would still be able to use and trust the same credentials they had been using with SpruceID. From this perspective, a counterpart to resilience is scalability.  That is, how easy it is for a new actor to adopt the standard and provide services using it – including filling gaps that might appear if other ecosystem players were to go away.

If a digital credential system is a network that carries information, you can think of it like a 19th-century railroad. It’s made up of trains and conductors and rails and stations - things you can see and touch. But it’s also made up of standards that underpin all that hardware - technical standards like track width and signaling technology and standardized ways the railroad is scheduled and operated. 

In the old days, railroads competed fiercely, and the utility, depth, and trustworthiness of those standards, including how well they allowed different systems to interact, played a big role in which railroads survived. Railroads with strong standards would be more likely to work well with other systems and make it easier for new operators to rebuild, bail out, or take over. To pick the most obvious example, a railroad that decided its tracks would be twelve inches wide when locomotive manufacturers were churning out dining cars for tracks four feet across would be far less resilient or scalable because of that choice.

Our approach to standards is based on the idea that things can cut in the other direction, as well: If one standard disappears or loses relevance, our systems will still have a second set of rails built to other workable standards. This is a key advantage of implementing multiple standards through one tool.

But the truly big unlock is the peace of mind of not having to worry too much about which standard is “best,” maybe before you even know what your customers and users will need.

Our priority is answering those specifics and making sure our implementation translates a general format into the best possible user experience. This includes assurances that their data is safely in their control and will be useful for the long haul, regardless of which invisible data formats win the long-term digital identity race.

Want to learn more or discuss your specific use case? Contact us to continue the conversation.

Get in Touch

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Indicio

Senior DevOps Engineer (Remote)


Senior DevOps Engineer (Remote)

 

Job Description

We are the world’s leading verifiable data technology company. We bring complete solutions that fit into an organization’s existing technology stack, delivering secure, trustworthy, verifiable information. Our Indicio Proven® flagship product removes complexity and reduces fraud. With Indicio Proven® you can build seamless processes to deliver best-in-class verifiable data products and services.

As a DevOps Engineer, you will play a crucial role in bridging the gap between development and operations. You will be responsible for designing, implementing, and managing our cloud infrastructure, build pipelines, and deployment strategies. Your expertise in Linux system administration, containerization, and cloud platforms will be vital in maintaining efficient and scalable development environments.

As a rapidly growing startup we need team members who can work in a fast-paced environment, produce high quality work on time, work without supervision, show initiative, innovate, be laser focused on results, and have outstanding communication skills. Indicio is a fully remote global team (our Maryland colleagues have a co-working space) and our clients are located around the world. You will create lasting impact and see the results of your work immediately. 

Responsibilities

Infrastructure Management: Design, deploy, and manage cloud infrastructure on AWS and GCP. Provision cloud resources and ensure the scalability, reliability, and performance of our systems.

Build and Deployment Pipelines: Develop and manage build pipelines using tools like Jenkins, GitHub Actions, GitLab CI/CD, or similar. Ensure automated and reliable software delivery processes.

Autoscaling and Monitoring: Implement autoscaling solutions to handle varying workloads. Set up and manage logging and monitoring infrastructure to ensure system health and performance.

Development Support: Collaborate with development teams to manage and optimize development environments. Assist in debugging by gathering and analyzing data from various sources. Participate in incident management and resolution.

Documentation and Best Practices: Create and maintain documentation for infrastructure and deployment processes. Advocate for and implement best practices in DevOps and continuous integration/continuous deployment (CI/CD).

Qualifications

Linux System Administration: Strong experience in Linux system administration, including configuration, troubleshooting, and performance tuning (required)

Containerization: Proficiency with Docker and container orchestration platforms (required)

Build Pipelines: Experience with CI/CD tools and building automated pipelines (required)

Version Control: Proficiency with Git for version control (required)

Cloud Platforms: Hands-on experience with AWS and/or GCP, including provisioning and managing cloud resources (required)

Autoscaling: Knowledge of autoscaling mechanisms and strategies (required)

Logging and Monitoring: Experience with logging and monitoring tools (e.g., ELK stack, Prometheus, Grafana) (required)

Apply today!

 

The post Senior DevOps Engineer (Remote) appeared first on Indicio.


KuppingerCole

From Directive to Action: The Value of Draft Documents in Navigating the NIS2 Compliance Challenge

by Matthias Reinwarth

Organizations across Europe are in the midst of a challenging process—implementing the requirements of the NIS2 Directive. This EU-wide cybersecurity legislation, which took effect on January 16, 2023, demands significant and broad-ranging compliance efforts. However, a key obstacle remains: the EU, or more specifically the member states who need to translate this into national legislation, have yet to provide detailed guidance on what organizations must do to comply. This ambiguity leaves much to interpretation, creating a fertile ground for third-party recommendations and, perhaps, confusion.

The Countdown to Compliance

NIS2 affects a wide range of companies, far more than its predecessor, the original NIS Directive. The clock is ticking, with the October 18, 2024, deadline for implementation drawing near. The problem? NIS2 requires that all member states integrate its provisions into their national cybersecurity laws, a process that is still incomplete in several countries. While some, like Germany, are nearing the final stages of this legislative adoption, the lack of detailed guidance has led many organizations to rely mainly on established control frameworks to meet the directive's broad requirements. And the NIS2 directive makes several references to the ISO/IEC 27000 family of standards, for example, as a source of best practice.

Many EU member states are still not fully prepared, and the directive’s broad, somewhat generic provisions—such as those in Article 21—leave organizations guessing what specific measures to take. Companies must adopt an all-hazards approach, addressing everything from risk analysis and incident handling to business continuity and supply chain security. Yet, the details of what this entails are sparse.

One exception to this lack of specificity is the requirement for multi-factor authentication (MFA), which NIS2 explicitly mentions. Beyond that, however, companies are left to navigate a landscape of general directives, hoping that their interpretations will suffice.

A Glimmer of Guidance

Amid this uncertainty, a notable development has quietly emerged. The European Commission recently published a draft Implementing Regulation (IR) that could bring much-needed clarity - though only for a narrow subset of entities within the digital infrastructure sector, such as cloud computing providers, DNS service providers, and online marketplaces. The draft includes an Annex with detailed controls, providing a level of specificity that many have been craving.

For example, in the realm of Identity and Access Management (IAM), where Article 21 (2) of NIS2 vaguely calls for “access control policies,” the Annex goes much further. It dedicates three full pages to detailed, actionable requirements. These include the need to establish and implement logical and physical access control policies for network and information systems, addressing access by people and processes, and ensuring access is granted only after proper authentication. The document demands the regular review and update of these policies, and the management of access rights based on principles like least privilege and separation of duties. It even specifies requirements for privileged accounts, system administration systems, and the life cycle management of identities, including secure authentication procedures.

And there is much more, as this was just a single example.

Article 3 of the main draft document defines basic criteria for identifying "significant incidents". And who wasn't looking for such a definition (although even these criteria could be clearer)?

Want more? Chapter 3 of the Annex provides another three pages of incident management controls, from establishing a comprehensive incident handling policy (including clear roles, responsibilities, and procedures for detecting, analyzing, responding to, and reporting incidents) to post-incident reviews.

This level of detail, while only applicable to the given list of specific sectors when approved, provides a solid foundation for organizations preparing for NIS2 compliance. It offers a glimpse into what might become the standard for other industries and states as well.

The Broader Implications

If approved and finalized, this draft regulation will only apply to certain sectors, but it's easy to see how it could serve as a blueprint for broader national legislation. The specificity it offers contrasts sharply with the general nature of NIS2 itself, making it a valuable resource for organizations seeking to align with the directive’s requirements. Indeed, as more countries finalize their national implementations of NIS2, it is likely that they will look to this draft IR as a model for their own regulatory frameworks.

However, it’s again important to note that this regulation is still in draft form. And it will only directly affect multinational organizations in the digital infrastructure sector that would otherwise fall through the regulatory cracks. But even in its current state, not yet applicable and with limited scope, the draft IR makes sense. It’s a step toward the clarity and guidance that practitioners - especially those on the front lines of cybersecurity - desperately need.

A Practitioner’s Take

As someone who is not a lawyer but looks at these regulations from both an analyst's and a practitioner's perspective, I see the value in any document that offers a reasonable level of detail. For organizations struggling to prepare for NIS2, the draft IR’s specificity provides a welcome roadmap. It’s likely that we’ll see elements of this document adopted more broadly, shaping the way national legislations and implementation procedures evolve.

In the meantime, organizations might want to keep a close eye on developments around this draft IR. While it is, yes, still a draft and its future applicability will be limited, the clarity it offers could soon extend to a much wider audience, helping to dispel some of the uncertainty surrounding NIS2 compliance.


Elliptic

As the US election nears, AI political deepfake scams are targeting crypto users

Crypto has taken a prominent stage in the US election campaign, with Donald Trump and Robert F. Kennedy Jr. attending Bitcoin 2024 in Nashville and Kamala Harris reportedly set to soften her stance on blockchain technologies.

This increasing interest in the benefits of crypto, and how it can be safe and accessible to everyone, is welcome. As with any major event or new technology – be it elections, pandemics, conflict or AI – a small minority of illicit actors will nevertheless seek to capitalize on these developments to defraud innocent victims out of their funds.

Amid an increase in election-related scam activity, Elliptic advises both new and experienced crypto users to be vigilant about suspicious deepfake videos and investment opportunities, and to familiarize themselves with their red-flag indicators.

The latest identified scams indicate that fraudsters are exploiting Trump’s Bitcoin 2024 speech, his nomination of crypto-friendly running mate JD Vance and Elon Musk’s recent endorsement as a means of luring interested individuals into “get rich quick” schemes. Scammers are using AI-generated deepfakes to manipulate speeches of individuals such as Trump and Musk to depict them as promoting fake crypto investment sites.

As the Democrats launch a “Crypto for Harris” initiative, it is possible that the Vice President’s likeness may also be exploited by deepfake scammers throughout her campaign.

Elliptic has recently published a report into AI-enabled crime in the cryptocurrency ecosystem. Download your copy here.


Ontology

Apple Opens NFC Chip

Implications for Decentralized Identity and Contactless Technology

Apple has announced a significant change in its approach to Near Field Communication (NFC) technology on iPhones. Starting with iOS 18.1, Apple will open up the iPhone’s NFC chip and Secure Element to third-party developers, allowing for contactless transactions outside of Apple Pay and Apple Wallet.

This move has far-reaching implications for the future of digital identity and contactless technology.

Decentralized Identity and NFC

The opening of Apple’s NFC chip aligns closely with the principles of decentralized identity, a framework that gives individuals control over their personal data and identity verification. With this new development, developers can create applications that leverage NFC technology for various identity-related purposes, including:

Digital IDs and passports
Corporate badges and student IDs
Hotel room keys and home access
Loyalty programs and event tickets

This shift towards decentralized identity solutions using NFC technology could revolutionize how we manage and verify our digital identities in both online and offline environments.

Impact on Contactless Payments

The opening of Apple’s NFC chip will create new opportunities for contactless payments. Banks and other financial services can now develop their own NFC-based payment solutions, potentially increasing competition in the mobile payments space. This could lead to more innovative payment options for consumers and businesses alike.

Implications for Developers and Businesses

Developers will need to enter into commercial agreements with Apple and pay associated fees to access the NFC and Secure Element APIs. This new capability will be available in several countries, including Australia, Brazil, Canada, Japan, New Zealand, the UK, and the US.

For businesses, this change opens up new possibilities for customer interaction and service delivery. From seamless check-ins at hotels to enhanced loyalty programs, the potential applications are vast.

The Future of Digital Identity

As we move towards a more digitally integrated world, the combination of NFC technology and decentralized identity principles could pave the way for more secure, user-controlled digital identities. This aligns with initiatives like the EU Digital Identity Wallet, signaling a broader shift in how we manage and verify identities in the digital age.

In conclusion, Apple’s decision to open up its NFC chip represents a significant step towards a more open and interoperable ecosystem for digital identity and contactless technology. As this technology evolves, we can expect to see innovative applications that enhance security, privacy, and user convenience across various sectors.

Apple Opens NFC Chip was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

Approaches to keep sending OTP over SMS... for now

Table of Contents

Approaches to keep sending OTP over SMS… for now
SMS/Voice is too SIMple
Hooked on telephony
Which regions?
How many messages?
How reliable?
From you or Okta?
How secure?
How many people?
Designing a DIY Hook
Handling failover to Okta
Vendors
Telephony providers
Consultants
Services
What Next?

Approaches to keep sending OTP over SMS… for now

“SMS has long played an important role as a universally applicable method of verifying a user’s identity via one-time passcodes. And over the last decade, SMS and voice-based Multifactor Authentication has prevented untold attempts to compromise user accounts.

But it’s time to move on.”

– Ben King, VP Customer Trust: BYO Telephony and the future of SMS at Okta

SMS/Voice is too SIMple

The one-time passcode (OTP) you send using SMS or Voice may not go to the phone you want. SIM swapping–stealing someone else’s phone number–lets bad actors receive the message or call with the code. They’re one step closer to breaking into your system. And if all it takes is an account name and OTP, they may succeed. And it’s not just SIM hacking; other issues include:

No phishing resistance

No control of the channel for sending secrets

No way to link a user to their device

Longer login times than other methods

Okta recommended moving away from SMS/Voice authentication some time ago. There are many other factors you can use for authentication, including:

Generating codes in an authenticator app such as Okta Verify, Authy, Google Authenticator, or 1Password.

FIDO2.0 (WebAuthn) which, in addition to phones, can use hardware keys and on-device authenticators.
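Under the hood, authenticator apps like these typically generate time-based one-time passwords (TOTP, RFC 6238), deriving a short code from a shared secret and the current time. A minimal standard-library sketch (illustrative only, not any vendor's actual implementation):

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with a time-derived counter."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)

# RFC 6238 test vector (SHA-1 mode, secret "12345678901234567890", t=59)
print(totp(b"12345678901234567890", at=59))  # → 287082
```

Because both sides derive the code from the same secret and clock, no code ever travels over SMS, which is what removes the SIM-swap risk.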

Soon, Okta will require you to bring your own telephony provider to keep sending those codes. If you need time to move to a different method of verifying identity, you must configure your own provider for SMS/Voice.

Hooked on telephony

You can send the OTP in the SMS/Voice flow using the telephony inline hook. Okta uses the code or URL in the hook to send the OTP, though, as you’ll see, the hook may not be called every time (and that’s a good thing). When your hook fails to send the message or takes too long to update the status, Okta takes over sending the message. However, the number of those messages is heavily rate-limited.

The code or URL you provide may simply send the message and communicate the outcome to Okta. The code or server may be more complex, managing geo-specific vendors, failure, failover to another provider, and hacking. No matter how easy or complex the code, there are three main approaches:

Implement the code and use your own telephony provider or providers.

Outsource the implementation and use your own telephony provider or providers.

Use a managed service that manages the process for you.

Some of the main things to consider when choosing an approach are the regions for messages, the expected traffic, the desired reliability, branding requirements, protection from hacking, and your resources.

Which regions?

Two things can identify a region. First are any regulations for sending messages. Those regulations can be set by collectives, such as the European Union, countries, or even sub-parts of a country. Second is the area covered by the telco sending the message.

Sending messages to more than one region may have at least two impacts. First, check that your desired vendor or vendors cover those regions.

Second, the features and regulations for traffic may differ from region to region. Some of the differences include:

Limitations on the types of entities that can send messages by SMS. This typically requires proof of identity and business registration.

Registration of a sender ID for your business. For example, messages without a valid sender ID are automatically marked as “Likely-SCAM” in Singapore.

Using short codes–special telephone numbers designed for high traffic. This can add significant cost.

Supported formats, such as ASCII and Unicode.

Character length limits for messages. Note that each Unicode item counts as two characters.

Check that your vendor supports the regulations in your desired regions.
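To make the length limits concrete: a single GSM-7 message holds 160 characters (153 per segment when concatenated), while a message containing characters outside the GSM alphabet falls back to UCS-2 and holds only 70 (67 per segment). A rough sketch, with the simplifying assumption that any non-ASCII character forces UCS-2:

```python
import math

def sms_segments(message: str) -> int:
    """Estimate how many SMS segments a message will occupy.

    Simplification: any non-ASCII character is assumed to force UCS-2.
    (The real GSM 03.38 alphabet includes some non-ASCII characters,
    and a few ASCII characters count double.)
    """
    if all(ord(ch) < 128 for ch in message):
        single, multi = 160, 153   # GSM-7 limits
    else:
        single, multi = 70, 67     # UCS-2 limits
    if len(message) <= single:
        return 1
    return math.ceil(len(message) / multi)

print(sms_segments("Your code is 123456"))  # → 1
```

Since each segment is billed separately, keeping OTP messages short and ASCII-only can meaningfully reduce cost.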

How many messages?

Telephony vendors or service providers need to know the volume of messages. And not just the average volume, but any peaks, such as a time when a majority of people are trying to sign on to your network.

Service cost is the most obvious issue related to volume, but there are two others. First is the impact on the rate limits used to prevent spam texts. These limits can prevent messages from being sent, especially during peak volume; vendors may be able to increase limits.

The second impact, the reputation score, also limits the volume of messages. The lower the reputation score, the fewer messages you can send. The goal is to prevent bad actors from sending lots of spam. Newer and smaller companies start with a lower score. The score increases over time as you send messages without hitting rate limits.

Some telephony vendors or service providers can work around this limit. For example, a service provider may use their own reputation or send messages from a pool of phone numbers.

How reliable?

Delivering the OTP to a phone requires several steps, and any of them can fail. The more steps, the more code between the OTP and the requestor, and the more chances of failure.

Most telephony and other service providers provide a service level agreement (SLA). Availability (or uptime) is the most common measurement: the percentage of time a service can receive your request and send the message. But there are other things to consider: delivery time, knowing if it’s delivered or not, and round-trip time (total time from request to notification of outcome).

That last number is important as there’s a time limit of three seconds from Okta calling the hook to receiving a success (or failure) result. After that, the default is that Okta sends the message using its providers. However, those sends are heavily rate-limited.

From you or Okta?

Implementing the code for the hook yourself or using a consultant gives you the most control over message content. Services may offer partial or complete content customization.

You can customize the SMS messages sent by the Okta failover mechanism, though not the voice calls.

How secure?

Okta still rate-limits calls to the telephony hook to prevent spam or toll fraud. But that’s not the only security issue.

Whether you implement the hook yourself or use a service, the endpoints and calls must be protected from attacks. That includes protecting any API keys and preventing unauthorized access and use.
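For example, Okta inline hooks can be configured to send an authorization header with each call; your endpoint should verify it with a constant-time comparison before doing anything else. A minimal sketch (the header name and environment variable are illustrative):

```python
import hmac
import os

# Hypothetical setup: the shared secret is configured both in the Okta
# hook's auth settings and in this environment variable.
EXPECTED_SECRET = os.environ.get("HOOK_AUTH_SECRET", "change-me")

def is_authorized(headers: dict) -> bool:
    """Constant-time check of the hook's shared-secret header."""
    supplied = headers.get("Authorization", "")
    return hmac.compare_digest(supplied, EXPECTED_SECRET)
```

`hmac.compare_digest` avoids leaking where the comparison diverges, which a plain `==` on strings can do via timing.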

There are also accounts with the provider or service that must be secured.

How many people?

No matter the other concerns you identified, processes will change and update, and new things will need to be done.

New message flows and failovers require updating existing support processes for SMS/Voice users. This may include working with your chosen telephony or service vendor. You may also need to add more frequent log monitoring to detect when the failover rate limit prevents Okta from sending messages.

Vendors need management. Projects for implementing the chosen approach need planning and project management. The resources for the implementation phase vary significantly.

Implementing custom code is similar to adding a somewhat complex feature to your product: it requires product management/specification, design, engineering, testing, and project management. Outsourcing the implementation can reduce the technical resources but adds vendor management.

Moving to a service provider minimizes the technical requirements, though there’s still vendor management and monitoring.

Designing a DIY Hook

The first step in implementing a telephony hook is finding a vendor. There are at least three essential criteria:

Send messages to the desired regions

Meet reliability requirements, especially when handling failover

Allow the desired volume of messages

That last point is because some vendors limit the volume for smaller or unknown companies.

The server you write for the telephony hook uses the information received from Okta to construct a message request to your vendor. The status of the message also needs to be communicated back to Okta. Sometimes, this requires translating the data from the telephony provider into the JSON format expected by Okta.
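As a sketch of that translation step, the handler below pulls the phone number and OTP out of the hook request and reports the outcome back in the command format Okta expects. The field names follow Okta's documented telephony-hook shape but should be checked against the current docs, and `send_sms` is a hypothetical stand-in for a real provider client:

```python
import json

def build_okta_response(status: str, provider: str, transaction_id: str) -> dict:
    """Build the JSON body Okta expects back from a telephony inline hook.

    Follows the com.okta.telephony.action command shape; verify against
    Okta's current documentation before relying on this sketch.
    """
    return {
        "commands": [{
            "type": "com.okta.telephony.action",
            "value": [{
                "status": status,              # "SUCCESSFUL" or "FAILED"
                "provider": provider,
                "transactionId": transaction_id,
            }],
        }]
    }

def handle_hook(request_body: str, send_sms) -> dict:
    """Extract the phone number and OTP from Okta's request and hand them
    to a provider client. send_sms is a hypothetical callable that returns
    a provider transaction id, or raises on failure."""
    profile = json.loads(request_body)["data"]["messageProfile"]
    try:
        tx_id = send_sms(profile["phoneNumber"],
                         "Your code is " + profile["otpCode"])
        return build_okta_response("SUCCESSFUL", "EXAMPLE_PROVIDER", tx_id)
    except Exception:
        return build_okta_response("FAILED", "EXAMPLE_PROVIDER", "")
```

In a real deployment this sits behind an HTTPS endpoint that authenticates Okta's calls before sending anything.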

Handling failover to Okta

Another case you must handle is a failover to Okta. Failover happens when something goes wrong with your telephony hook. Okta takes over sending the message, but the number of messages is heavily rate-limited. The only way to determine if the message was sent is by searching the logs to see when sends started failing. Your messages may never arrive.

There are two triggers for failover: your telephony hook returns a “failed” status to Okta, or a three-second timeout passes.

You can prevent failover by always returning a successful result or requesting Okta to disable failover for your organization. However, doing so means that you must handle failed message sends. That requires more complex server code and possibly multiple vendors.
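A sketch of that more complex approach: try each configured provider in turn, staying under the three-second deadline, and only report failure when all are exhausted. The provider callables are hypothetical stand-ins for real SDK calls:

```python
import time

def send_with_failover(phone: str, message: str, providers, budget_s: float = 2.5):
    """Try each provider in order, stopping before the ~3 s hook deadline.

    providers is a list of (name, send_fn) pairs; each hypothetical send_fn
    returns a transaction id or raises. Returns (status, name, tx_id).
    """
    deadline = time.monotonic() + budget_s   # headroom under Okta's 3 s limit
    for name, send_fn in providers:
        if time.monotonic() >= deadline:
            break
        try:
            return ("SUCCESSFUL", name, send_fn(phone, message))
        except Exception:
            continue  # fall through to the next provider
    return ("FAILED", "", "")
```

If Okta failover is disabled for your organization, treat the FAILED outcome as an alertable event, since no message will be sent at all.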

Vendors

The kind of vendors you need depends on your approach. Below are a few possibilities. Some are recommendations from Okta, and others are suggestions. No matter what, make sure that the vendor meets your criteria.

Telephony providers

Here are some vendors you can use to implement the hook in-house or with a consultant.

Telesign

Twilio

Vonage

Consultants

Many consulting companies can implement the hook for you. Another option is to use Okta professional services.

Services

Some services deliver the SMS for you. That can include handling unavailable telephony vendors, resends, and other issues. Adding a service usually requires only adding a URL for the telephony hook.

Services include:

Amazon Pinpoint

BeyondID

Twilio Verify

What Next?

If you rely on SMS for authentication, start thinking about how to replace it. In the meantime, use what you’ve learned in this post to keep your solutions as secure as possible.

For more content like this, follow Okta Developer on Twitter and subscribe to our YouTube channel. If you have any questions about migrating away from SMS, please comment below!


PingTalk

What is Segregation of Duties?

Read this blog to understand what Segregation of Duties is and why it’s a critical piece of identity security for today’s enterprises.

In today’s fast-paced, technology-driven business landscape, maintaining robust security and compliance protocols is paramount. One critical concept that organizations adopt to safeguard their operations is Segregation of Duties (SoD). Read on to understand what Segregation of Duties is, why Segregation of Duties is an important policy in today’s world, and how modern tools and technologies can facilitate its effective implementation.

Wednesday, 14. August 2024

HYPR

Navigate Passkey Adoption: 6 Tips on How to Go Passwordless

By now, most of us realize that passkeys and passwordless authentication beat passwords in nearly every way — they’re more secure, resist phishing and theft, and eliminate the need to remember and type in an ever-growing string of characters. Despite this, most organizations still rely on password-based authentication methods.

Transitioning to passwordless authentication offers a far more secure and user-friendly experience, but making the switch can seem daunting. In fact, the most recent Passwordless Identity Assurance survey found that nearly one third (31%) of organizations name implementing passkeys as a primary identity security challenge.

Technical integration is only one aspect. For many organizations, rolling it out to users and getting them to use it can be the thornier part.

Understanding User Adoption

User resistance to an unfamiliar technology can be a  hurdle in transitioning to passwordless. It’s critical to take a phased, change management approach, including pilot programs and early adopter groups. Clear communication about the benefits of passkey systems, referencing successful case studies, and industry best practices, helps allay user skepticism and increase acceptance.

User-centric design and understanding the psychology of habit formation are essential to achieve widespread adoption. User experience greatly impacts a passwordless initiative — balancing security and convenience is key. Consider and address your varying use cases, potential accessibility issues, and technical challenges, such as legacy systems.  

As Director of Customer Excellence at HYPR, I’ve worked with many customers during their passwordless transition. As someone with even more years in IAM and customer experience in general, I’ve seen and heard many tech rollout tales. Here are some of the top tips to help your organization navigate passkey adoption effectively.

Six Best Practices for Passkey Adoption

1. Map Out Use Cases

Different user groups within an organization may have varying needs, both in their job function and as individuals. When it comes to passkey adoption, one size doesn’t fit all. Multiple passwordless options may be required.

Begin by getting a full picture of your current login methods. Identify priority login systems and stakeholders. Do you have remote or hybrid employees? What IdPs, devices, browsers and operating systems are being used? Consider non-employees like contractors, business partners, or volunteers — when and how do they log in? For users who travel extensively, note any special authentication requirements.

PRO TIP: Identify any applications, systems, or tools with additional authentication controls due to sensitive data. Evaluate specific user accounts, like IT administrators, with higher security needs.

2. Identify and Plan for Legacy Systems and Other Challenges

Passwordless deployments can be hindered by legacy systems, technical and usability concerns, and a lack of preparedness for the reliance on secondary devices. Look at the legacy applications in your tech stack, how they are used, and their current authentication methods. Will your passwordless solution integrate with them? If system updates or configurations are required for compatibility, make sure to get leadership buy-in during the planning stage.

Addressing technical and usability concerns requires capturing all unique workstream requirements and considering business-specific constraints. For example, customer-facing roles may have different constraints than back-office roles or a manufacturing floor. Users that travel may require offline authentication options.

PRO TIP: The reliance on secondary devices makes it critical to be ready with secure recovery and backup options.

3. Strategic Planning for Passwordless Authentication Rollout

Thorough planning and secure process design are critical for successful rollout. Authentication is a critical path product — ensure you’re prepared. Establish timelines, set roll-out stages, and develop communication plans. Conduct a pilot test with a small group to help identify and address potential issues before roll out.

Take the rollout in stages too, beginning with a first-adopter group. This approach allows for fine-tuning the system and ensuring a smoother transition for the entire organization. Ideally, the group will include both technically minded people as well as those less comfortable with technology. Your early adopters should represent a cross-section of use cases, especially privileged users or other groups with specific security requirements. The sequence and timing of the rollout will depend on your unique environment and business, but make sure senior leadership is part of the earliest stages — a top-down approach significantly helps end-user buy-in and speeds passkey adoption.

Communication during all stages is critical to both educate and preempt objections. Concerns about biometric data usage, for example, can be mitigated through educational campaigns that clarify how such data is stored and protected.

PRO TIP: Consider aligning your password policy with your improved security strategy by enforcing complex passwords in line with guidance from CISA and PCI DSS 4.0 requirements. Your users will look forward to the ease of passwordless authentication.

4. Clear Communication and Guidance

Effective communication and guidance are essential for facilitating passkey adoption. Clear, concise, and user-friendly documentation can help users understand and adapt to new authentication methods. Early adopters can provide invaluable feedback to improve documentation and identify fringe use cases and outlier scenarios.

User adoption relies on awareness of the improvements passwordless authentication offers over traditional methods. The FIDO Alliance provides some helpful communication recommendations in their Design Guidelines.

Explain that you are replacing passwords with stronger, phishing-resistant authentication. Don’t get hung up on terminology – use what works best for your users. For example, one of our customers used the term “non-shareable credentials” instead of passwordless authentication or passkeys as that resonated better with their workforce.

Provide training on new login flows, highlighting speed, ease-of-use and security. Use multiple touchpoints, such as town halls, training videos, and cheat sheets. Include guidelines for troubleshooting issues like lost devices and keep stakeholders updated throughout the transition. Importantly, solicit user feedback and be prepared to adjust communication materials if needed.

Example user communication courtesy of the FIDO Alliance

5. User Onboarding and Support

Plan for supporting your users when the new system goes live. Make sure you take into account the needs of users in different time zones or those who travel frequently. Train your help desk to educate as well as troubleshoot. Ensure that support resources are readily available to address any issues that arise. Monitor KPIs like login times, call volume and ticket metrics pre- vs. post-implementation.

PRO TIP: Create a promotion or contest, with prizes. Offer gift incentives, swag, or giveaways to the first enrollees, or to everyone who enrolls.

6. Choose the Right Passwordless Solution

All of the previous steps depend upon you selecting the right passwordless provider for your environment, user population and use cases.  The optimal solution removes adoption obstacles, balancing hardened security with maximized convenience and quick deployment. If you’re reading this, you’ve likely decided that a solution based on FIDO Certified passkeys is the best approach, but there are a wide range of options within this category. Assess vendor offerings based on cryptography standards, biometric and device support, scalability, customer success rate and implementation timeframe. Ease of integration with existing web/IT infrastructure is critical.

💡12 Considerations for Assessing a Passkey Solution — Download the Guide

PRO TIP: Don’t forget that secondary authentication processes and situations — registration, re-registration, lost and stolen devices — must also be protected. Look at your provider’s entire set of identity security capabilities — do they provide identity proofing technologies and other critical identity security controls?

HYPR Is Your Passkey Adoption Partner

Companies need an identity security partner with expertise in change management and a solution that provides flexibility along with the controls enterprises require. HYPR has been helping companies implement passkeys and passwordless authentication for more than a decade. This includes a top U.S. bank with the largest workforce FIDO implementation in the world.

HYPR’s leading passwordless MFA solution, HYPR Authenticate, eliminates shared credentials while providing a friction-free user experience. It offers a range of authenticator options, including our award-winning passwordless app, and works everywhere, whether in-office or remote, online or off.  

HYPR Authenticate is the foundation of our Identity Assurance Platform, which combines phishing-resistant authentication, adaptive risk mitigation, and automated identity proofing and verification to secure the entire identity lifecycle. HYPR integrates with your current systems, IdPs, SSOs and applications to unify authentication across the business.

To find out how HYPR can help your organization go passwordless, painlessly, get in touch with our team.


Anonym

Here’s How Credit Unions and Banks Can Save 20,000 Staff Minutes a Month

Credit unions and banks can save a massive 20,000 minutes a month – which translates to about 4–5 staff members’ time – by implementing a single data privacy solution. 

That’s the startling message our Anonyome Labs sales team had for popular credit union talk show host Mike Lawson at the credit union advocacy conference, GAC 2024, in Washington D.C. earlier this year.

Mike’s CU Broadcast welcomed us on to discuss Anonyome Labs’ revolutionary identity verification solution, reusable credentials, and why it’s now so important that financial institutions embrace this highly innovative technology to keep member data safe and savvy consumers happy. 

Listen to the CU Broadcast episode 

We had 6 a-ha moments for Mike during the CU Broadcast episode:

1. There’s a gap in the Know Your Customer (KYC) process that we can fix right now. Current onboarding and KYC requirements demand loads of personal information from new customers, which takes a long time to process and is at risk of data breach. What’s more, many credit unions are still manually processing onboarding data, which causes friction, turns off time-poor and tech-savvy consumers, and is open to fraud. Some key problems here are that 80% of us don’t know where our local branch is anymore; every transaction with a credit union requires the member to hand over different pieces of their personal information (e.g. mother’s maiden name, driver’s license, etc.); data breaches are rampant; and consumers are increasingly questioning why companies need so much of their personal information to access services.

2. Anonyome Labs’ market-leading reusable credentials can solve these problems effortlessly. The new technology replaces disparate, traditional processes with a single cryptographically protected digital ID that is persistent, irrefutable, and customer-controlled. With a reusable credential, the customer only has to verify themself once, and the same credential ecosystem creates proof of identity for any of their interactions with the credit union. Reusable credentials leverage groundbreaking decentralized identity and blockchain technology to secure the information and, conveniently, the customer stores their credential on their mobile device. Nothing could be simpler.

3. Reusable credentials save financial institutions about 20,000 minutes a month in staff time. This is a big one! Instead of all that double handling of data and slow manual processes, a reusable credential streamlines the member’s experience in today’s fast-paced and data-driven environment, potentially shaving a minute-and-a-half off each member’s verification time and saving around 20,000 staff minutes a month. The time savings add up to the equivalent of about four or five, even six, staff members that a credit union wouldn’t have to hire, which is significant, especially now when staffing is a pain point.

4. It’s crucial that credit unions realize the power they have in member data. As host Mike Lawson pointed out in the episode, “Credit unions have more data on their members than Amazon has on their customers!” He also noted that credit unions list data protection as one of their top 3 concerns. We agree, which is why Anonyome Labs’ solution can be very beneficial for the credit union because we can reduce fraud and optimize onboarding time. It’s a cost saving, it’s fraud prevention, and it’s safer. It’s an absolute win-win across the board.

5. Most credit unions want to make an impact or change, but they don’t know where to start. We say: Start with onboarding! Customers are now in a world of instant gratification. A clunky onboarding experience will lose you customers. Think about college students opening accounts for the first time. They’re a key audience, onboarding is their first impression of the financial institution, and they have little or zero tolerance for friction. Reusable credentials are the answer. And once you have improved your verification processes, we recommend looking next at optimizing loan processes.

6. This technology might sound new to banking, but Anonyome Labs has been pioneering in the area for 10 years. Decentralized identity (the technology underpinning reusable credentials) sounds complex, but it’s really just about giving the customer control of their information so they get to decide what they share and with whom. We have about 20 patents around this technology. In the episode, Mike wrapped things up by observing: “Anonyome Labs is striking the balance between security and convenience.” We agree!

Thanks Mike Lawson for having us on your couch during the recent GAC 2024—the biggest credit union advocacy event of the year, hosted by America’s Credit Unions! Our sales team enjoyed walking the floor and meeting folks at this important industry event.   

Anonyome Labs is the leader in proactive identity protection technologies. From verifiable credentials to VPNs and encrypted communications, we leverage our cryptography and blockchain technology expertise to take data privacy and security to the next level. Check out our podcast, Privacy Files, to hear what your peers and experts are saying about the state of member and consumer privacy in real time. 

If you’d like to get started with reusable credentials or other privacy and security solutions, get in touch today! 

The post Here’s How Credit Unions and Banks Can Save 20,000 Staff Minutes a Month appeared first on Anonyome Labs.


Trinsic Podcast: Future of ID

Kim Hamilton Duffy - From Learning Machine to DIF and the Evolution of Decentralized Identity

In this episode, I talk with Kim Hamilton Duffy, the Executive Director of the Decentralized Identity Foundation (DIF). Before her work at DIF, Kim served as the CTO at Learning Machine, an early pioneer in the self-sovereign identity space that was acquired in 2020.

We cover a range of topics, including:

- The early days at Learning Machine and how they acquired their first customers
- The messaging strategies that resonated and the unexpected moves that set them apart, like making it easy for customers to leave
- How adoption exceeded expectations at Learning Machine and how that compares to the current decentralized identity landscape

Kim offers deep insights from her extensive experience in the digital identity ecosystem, making this a conversation you won't want to miss!

You can learn more about DIF on their website: identity.foundation.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


KuppingerCole

Sep 11, 2024: A Glimpse into the 2024 IGA Market Landscape

The IGA market continues to grow, and although at a mature technical stage, it continues to evolve in the areas of intelligence and automation. Today, there are still some organizations looking at replacements of UAP and ILM or IAG, but most are opting for a comprehensive IGA solution that simplifies deployment and operations and tackles risks originating from inefficient access governance features. The level of identity and access intelligence has become a key differentiator between IGA product solutions. Automation is still the key trend in IGA to reduce management workload by automating tasks, providing recommendations, and improving operational efficiency.

Tuesday, 13. August 2024

KuppingerCole

The State of the CIAM Market

The CIAM market continues to grow and change. There have been major acquisitions in this space, and new vendors are launching products and services. Security is always a driver, but deploying organizations want useful data to improve marketing effectiveness and increase revenues. New privacy regulations put more requirements for information collection and handling on customer organizations. CIAM systems must also be able to integrate with other IT, security, and enterprise IAM solutions. To capture market share, CIAM vendors have to be innovative. Fraud prevention and integrations with marketing tools are differentiators that many companies are looking for in CIAM.

John Tolbert, Director of Cybersecurity Research at KuppingerCole, has been covering the CIAM market for nearly a decade. In this webinar, he'll discuss the business requirements commonly submitted for CIAM RFPs, the current state-of-the-art in CIAM, and the innovative features that leading edge solutions offer. He will describe our Leadership Compass methodology and process, and show some high-level results from the report, which was just published this summer.




Indicio

Why you should vote for the Digital Farm Wallet in the SuperNova Awards

The post Why you should vote for the Digital Farm Wallet in the SuperNova Awards appeared first on Indicio.
Trust Alliance New Zealand (TANZ) is a finalist in the SuperNova Awards for its transformative use of decentralized identity in agriculture. TANZ, co-funded by the New Zealand Government’s Ministry for Primary Industries, created a pilot decentralized ecosystem for farmers to understand how to share trusted data on regulatory compliance around emissions and environmental sustainability — and it’s so successful, it’s being scaled to cover the entire agricultural sector. Here’s why the project deserves your vote.

By James Schulte

The project

Farming is a data-intensive business, from animal welfare and food safety to greenhouse gas emissions and soil health, all of which are tied to market access, regulatory compliance, and consumer confidence.

In line with the government of New Zealand’s introduction of a Digital Identity Services Trust Framework, Trust Alliance New Zealand (TANZ), a non-profit, member-driven, farming industry consortium, conceived and built a pilot digital farm wallet and decentralized ecosystem (with partners Indicio and Anonyome Labs) for the country’s primary sector.

The Digital Farm Wallet pilot project was launched in January 2023 to provide farmers and other relevant parties with a secure, permissioned way to capture and share data while preserving privacy. The focus, initially, was on regulatory compliance and simplifying that burden.

But a key goal was to create the digital infrastructure to transform “brand” promises — origin, welfare, environmental compliance — into transparent proofs that consumers can trust, which is vital for New Zealand’s export market, and a transformative use of decentralized identity technology for global agriculture.

The result

The initial project quickly expanded to go beyond TANZ members to include farming organizations, regulators, and banks and had over 200 active participants. Each wallet had four to six credentials farmers could create and use, including farm ID, greenhouse gas emissions, nitrogen emissions, and geospatial farm boundaries, which  could then be submitted to relevant stakeholders, for example, regional councils, banks and/or processors.

The challenge was to bring so many competing stakeholders together to collaborate around and adopt a new and unfamiliar technology. This was overcome by education and, more importantly, by the nature of the technology itself and the benefits it quickly delivered.

Specifically, farmers were able to realize tangible savings in time and money by simplifying and streamlining compliance requirements. Paper documentation that often had to be submitted up to seven times was transformed into a simple verifiable credential presentation.

Second, the technology gave farmers control over this data and how they shared it. This gave them confidence that the technology wasn’t another app where their data was aggregated and managed by a third party. This was vital to fostering collaboration with competitors in the ecosystem.

“The project vastly exceeded expectations,” said Sharon Lyon, Project Manager at TANZ. “We set out to build a pilot digital wallet to cater to the farmers, and ended up creating a verifiable credential ecosystem focused on the relying parties. We realized that the value to farmers in the project comes from the parties that need the farm data, and the farmer being able to give the data in a trusted and permissioned way.

“Once the credentials were available, relying parties were onboarded into the pilot. Being able to quickly share data about their goods or emissions to these key relying parties provided a huge benefit to the farmers, saving them time, creating better connections between them and their customers, and reducing the amount of effort they have to spend filling out the same forms multiple times. So building a decentralized ecosystem for the sharing of digital proof points, or credentials, and not just a digital wallet, soon became our focus.”

The Digital Farm Wallet is in the process of being scaled to the entire New Zealand agricultural sector with expanded functionality.

If you would like to learn more about the project you can watch a recent discussion Indicio hosted on the Digital Farm Wallet here.

How to vote

Voting is live from August 5 to August 30 on the Constellation website and should take less than a minute. Please consider taking a moment to recognize all of the farmers’ lives that TANZ has improved, and the groundbreaking work put into this project.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community



Elliptic

Ensuring sanctions compliance for stablecoins with Ecosystem Monitoring

Complying with rules and regulations around financial and economic sanctions is one of the most challenging issues facing the cryptoasset space. 



KuppingerCole

ARCON drut. Robotics GRC and Process Automation Platform


by Warwick Ashford

This KuppingerCole Executive View report looks at the challenges of achieving effective governance, risk, and compliance (GRC) in the increasingly complex and dynamic digital environment. It examines the benefits of automation and includes a technical review of ARCON’s drut. robotics-based GRC and process automation platform.

Finema

This Month in Digital Identity — August Edition


Welcome to the August edition of our monthly digital identity segment! This month, we’re diving deep into pivotal advancements and strategies that are shaping the future of digital identity. Here’s an in-depth look at the key topics we’re covering:

Enhancing Digital Identity Adoption

🌍 Our first article focuses on the EBSI-CAN project meeting, a landmark event in advancing digital identity adoption and cross-border interoperability between the EU and Canada. This meeting was crucial in addressing the complex challenges faced by international digital identity systems. It explored various technical barriers that currently impede seamless integration of digital identity systems across borders, such as differing standards and protocols. Regulatory alignment was another key focus, with discussions centered on harmonizing regulations to facilitate smoother interactions and exchanges of digital identity information between regions. Collaborative frameworks were also highlighted as essential for fostering international partnerships and creating a unified approach to digital identity. By tackling these issues, the EBSI-CAN project aims to build a more cohesive and efficient digital identity ecosystem that supports global digital transactions and interactions. This initiative represents a significant step toward overcoming the fragmentation in digital identity systems and achieving a more integrated global digital landscape.

Advancing Decentralized Identity

🔒 Our second feature delves into the exciting progress being made in the decentralized identity sphere, particularly the integration of OpenID’s verifiable credential protocols with DIDComm. This development marks a significant leap forward in enhancing digital identity management. OpenID’s verifiable credentials provide a robust framework for issuing and verifying digital identity information, while DIDComm enables secure, direct communication between trusted parties. The integration of these technologies facilitates a more secure and efficient exchange of identity information, supporting self-sovereign identity systems where users have greater control over their personal data. This advancement not only improves the reliability of digital identity exchanges but also enhances privacy by ensuring that personal information is only shared with trusted entities under secure conditions. The combination of OpenID and DIDComm represents a major stride toward a more user-centric and resilient digital identity infrastructure, paving the way for more secure and flexible identity management solutions.

Balancing Privacy, Security, and Convenience

🔐 In our third article, we explore the ongoing evolution of digital identity with a focus on balancing privacy, security, and convenience. As digital identity systems become more advanced, decentralized solutions are emerging as a promising way to enhance user control over personal data. These systems offer significant advantages over traditional centralized models by providing greater transparency and control to users. Our article examines how these decentralized systems address common concerns related to privacy and security while still delivering a high level of convenience. It discusses the technological innovations that are reshaping personal data management, including new methods for protecting user data and ensuring secure interactions with digital services. By exploring these advancements, the article provides insights into how future digital identity solutions might evolve to meet both user expectations and regulatory requirements, ultimately leading to a more balanced and user-friendly digital identity landscape.

The Strategic Advantage of Open Working Practices

💼 Our final feature in this edition discusses the strategic benefits of adopting open working practices. Open working practices, characterized by transparency, inclusivity, adaptability, collaboration, and community, offer organizations a powerful approach to enhancing their operations. The article explores how these principles can lead to greater organizational agility by breaking down traditional barriers and fostering a culture of open communication and collective problem-solving. It highlights how open working practices can drive innovation by encouraging diverse perspectives and ideas, leading to more creative and effective solutions. Additionally, the article examines how these practices can improve employee engagement and satisfaction by creating a more inclusive and supportive work environment. By embracing open working principles, organizations can achieve sustainable success and strengthen their performance in a rapidly changing business landscape.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Stay tuned for future editions of our monthly segment!

This Month in Digital Identity — August Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 12. August 2024

IDnow

Beyond the regulatory tick box: Exploring the benefits of KYC.

New IDnow ebook unpacks the importance of KYC and how it can be used as a competitive differentiator.

In today’s online world, it’s hard to know who to trust. Digitalization and globalization have resulted in significant business challenges, such as increasing risks of fraud and identity theft, especially in the banking sector. Verifying prospective customers before they become users has therefore never been more important. 

For this reason, the Know Your Customer (KYC) process has become an integral step in securing financial transactions. Although a common compliance ‘tick-box’ requirement, some KYC processes can be overly complicated and lack transparency, which can lead to customer abandonment during onboarding. To set up a KYC customer journey that works for the business and the customer, it’s important to understand the role of KYC, how it works and how it can be used as a competitive differentiator. 

Click below to check out our latest ebook, ‘Building trust through KYC in banking’.

Building trust through KYC in banking: How can you set up a KYC process that satisfies your customers and meets regulatory requirements? Download the ebook to discover what KYC is, the importance of KYC in the banking sector, and the regulatory impact on KYC processes.

The importance of KYC in banking.

The KYC process is crucial in all situations where customers are involved in financial activities. Verifying a new customer’s identity and assessing potential risks helps banks establish trust in a customer profile, allows the bank to understand the nature of customer activities and provides protection from losses and fraud. Money laundering, in particular, remains a global problem that requires rigorous measures to combat effectively. 

According to the United Nations, money laundering accounts for 2-5% of global GDP (about US$800 billion to US$2 trillion) and banks have a major role in protecting against it. Criminal activity in this sector can affect the financial institution involved, customers, and wider markets and economies. Identity fraud can also cause serious financial harm. For example, in the United States, $16.1 billion in losses was attributed to identity theft in 2021. 

The days of visiting a bank to inquire about services or make a transaction are quickly coming to an end. Customers are increasingly unwilling to visit brick-and-mortar bank branches, and in many places they no longer can: in the UK, almost three-fifths of the bank network has closed since 2015, a trend reflected elsewhere in the world.

UK: 86% of adults use online banking or remote banking.
Germany: 84% use online or mobile banking to carry out essential bank transactions.
France: 96% of people actively use their online banking services.

“KYC needn’t be seen as a tick-box exercise that must be performed. The banking sector should see KYC as a valuable competitive differentiator; to not only reassure new customers that you take their business seriously, but existing customers that your bank is a safe and secure place to transact,” said Rayissa Armata, Director of Global Regulatory Affairs at IDnow.

Offering a safe and secure KYC process doesn’t mean it needs to be slow and cumbersome, it can be intuitive and be customized according to customer preference. In 2024 and beyond, as industries undergo their digital transformation, KYC will continue to become even more important.

Rayissa Armata, Director of Global Regulatory Affairs at IDnow.
The regulatory impact on KYC processes.

KYC processes have evolved significantly over the last decade thanks largely to a dynamic regulatory framework. These developments were mainly initiated at the European level and then transposed to the national level. AML laws have also gradually imposed standards applicable to KYC. There are six versions of the Anti-Money Laundering Directive, each of which was developed and released in response to political and societal issues surrounding money laundering and the latest fraud techniques.

While the banking sector and insurance companies are the main industries that are required to perform KYC, so-called non-financial companies are also included. For example, gambling platforms, real estate agents, art dealers, cryptocurrency platforms and sellers of luxury jewelry and precious metals.

Although not specifically designed for KYC, it is also important to consider the General Data Protection Regulation (GDPR) restrictions and requirements. This directly influences the way customer data is managed and requires companies to ensure that personal information collected for KYC is handled in accordance with the principles of privacy and data security.

Companies that do not comply with a country’s KYC obligations not only risk reputational harm and the potential loss of licenses but are also subject to heavy penalties from their national supervisory authority.

The importance of customer engagement, experience and expectations.

While it is mandatory to comply with regulations, the customer experience should never be taken for granted. The old saying “the customer is king” remains true, especially when banks move services online.  

Organizations with effective customer experience see an increase of 92% in customer loyalty. In this regard, customers can be a major driving force behind a bank’s success or failure. There are various things to consider when designing the ideal KYC customer journey. 

First, there are customer expectations. Proficiency and experience with different identification methods vary by country. French residents are more accustomed to online identification than Italian residents, for example. It is also important to consider customer service availability. In some southern European countries, users may be active at night and want to sign up and transact at that time, while further north, it’s more likely to be earlier in the day. This is why it is necessary to provide a real-time 24/7 service, accessible at any time and from any location.

Customer engagement is also very important. As the public are already used to fast and frictionless buying processes and hyper-personalized interactions in other areas of their digital daily life, they expect the same from their bank. 

Users expect to be able to sign up for a product or service quickly, and delays in this area may make them lose interest. As there is now a wealth of choice for customers and bank loyalty may no longer be rewarded, the onboarding and verification process is a vital opportunity that can lead to higher conversion rates.

The benefits of KYC in banking.

The main goal of the KYC process is to prevent criminal activity. This helps to protect the bank, its customers and the wider financial markets from fraud and other financial crime. This goes some way to explaining why regulations are so strict.  

However, there are other reasons to invest and comply with KYC, including: 

Cost efficient: Good KYC processes can help businesses increase their conversion rates and reduce the costs of manual processing.
Improve the customer experience: When properly implemented, the KYC process helps avoid friction between the company and the user by granting instant access after verification.
Build trust in the organization: While the checks and requirements can be onerous, customers want to see that their bank is taking the issue seriously. Compliance establishes credibility.
Meet legal requirements: As complying with regulations is a legal responsibility, non-compliance can result in hefty fines and lawsuits. Apart from the monetary impact, non-compliance can also damage the company’s reputation.

How IDnow helps banks comply with KYC.

From ID document scans to full identity checks, IDnow offers a range of automated services, including AutoIdent and IDCheck.io, to ensure a smooth and instant customer experience. User onboarding is automated, fraud is detected, and services are fully compliant with KYC and AML/CFT standards. 

Document capture: Our web or mobile SDK enables high-quality dynamic scanning for ID document capture, while providing excellent user experience.
Biometric capture: Our biometric tools enable users to take a selfie or facial recognition video to verify the identity of document holders.
Automated and/or manual data verification: Our fully automated document verification API extracts and verifies data in less than 12 seconds. In addition, a team of fraud experts can check documents manually.
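One concrete piece of automated document verification is validating the check digits in a passport's machine-readable zone (MRZ), as defined by ICAO Doc 9303. The sketch below shows only that public check-digit algorithm, not IDnow's API (whose interface is not described here):

```python
def mrz_check_digit(field: str) -> int:
    """ICAO Doc 9303 check digit: digits keep their value, letters map
    A=10..Z=35, the filler '<' counts as 0; positions are weighted
    7, 3, 1 repeating, and the sum is taken modulo 10."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10
        elif ch == "<":
            value = 0
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]
    return total % 10

# e.g. a date-of-birth field "740812" carries check digit 2
assert mrz_check_digit("740812") == 2
```

A verifier recomputes this digit for each MRZ field (document number, date of birth, expiry date) and rejects the scan on any mismatch, which catches most OCR errors and crude tampering before deeper checks run.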

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Verida

Verida Community Update — Verida.ai Launch and Network Explorer

Hello, gm everyone!

Chris Were here, the CEO and Co-founder at Verida 👋

This latest update covers new developments and releases from the past two weeks and a sneak preview of the upcoming Private Data Bridge developer tools.

Transcript:

Welcome to my latest community update for the Verida Network. Thank you for tuning in. It’s the 12th of August 2024, and there’s a little bit to cover today from what we’ve been working on and what we’ve released over the last two weeks.

Verida Network Explorer

So we’ll start with the Verida Network Explorer. This was recently announced. We’ve got a new refresh here, with a new layout.

We now have some nice graphs of the identities that have been created on the network, and you can now actually browse and navigate through the different DIDs and identities that were created on the network.

We had a really interesting design decision. Obviously, we’re a privacy-first network, and we support private, encrypted data, but we also support public profiles and public data, so people can optionally make information public. So we had a bit of a question here: do we show that public information here on the Network Explorer, or even though it is public, should we not show it and not make it as accessible? Even though the information is public on the network, we could make it a little bit more hidden because we do make it clear that information is public when you create an account in the Verida Wallet.

We did decide to make that visible. I’m interested in people’s feedback and thoughts on that. Feel free to post in the comments or reply to this thread if you’ve got a different take. Privacy is a really interesting problem, and there are pros and cons for both approaches, but we do have the Network Explorer release now. You can actually click through and have a look at the nodes that are on the network.

This is currently the mainnet Explorer, which is explorer.verida.network. We also have a testnet Explorer, which you can find a link to in our official announcement. And we will continue to expand the number of nodes that are available and expand the information that’s actually available about these nodes as we continue to expand the interfaces and the tools that we have around the Verida Network.

Verida.ai Launches!

Now, the big news that we have is we announced the Verida AI landing page and website, which is super exciting. As you’ve probably been following, we believe that the ability to own all of your data and then connect that to AI is a really powerful use case, and it’s going to be something that, in the coming months and years, we’re all going to expect: the ability to have AI that knows everything about us. But it’s super important that if we have these types of tools, they are privacy-preserving, and it’s only me that has access to this information — it’s not the big tech companies or other third parties. It’s just you with your private key, and you’re the only one that can talk to a private AI agent. So this landing page is really the start of that journey at Verida.

We are working with an ecosystem of partners to build out different assistants for different purposes. We’re actually building a showcase, an example assistant using your data. That is just a good starting point that developers can fork and use to build their own assistants. But we have partners that are building really advanced and interesting products that we’re going to connect into and allow your user data that you connect into Verida to connect into these other projects.

And that’s a really important part of what we’re doing because this personal AI, this private AI space, is really emerging. There’s a lot of R&D that needs to happen. There are lots of different ways of tackling these problems. We want to partner with the best teams in the world that are tackling those problems and really help provide the infrastructure for your data to connect to those assistants, and also help provide the private database, storage, and private computation that’s needed to protect your data when it’s running with these different types of agents.

So if you’re interested in this space, I really encourage you to visit verida.ai, click on the “Become an Early Adopter,” put your email address in, and follow our newsletter. We’d love to get more insight and feedback from you. So please fill out the form that you receive once you subscribe. Not only will you get early access to some of the projects that we’ve partnered with and some of the AI assistants that we’re building to showcase, but you’ll also be able to keep up to date with the latest news, specifically about private AI, AI that’s built using your data, and what’s happening in that space.

If you haven’t already, check it out. We’ve got a mockup of what this is actually going to look like: the ability to talk to an AI, different types of assistants for different purposes, connecting different types of data to the AI. Obviously, a chat interface is what we’re used to when you’re talking to large language models. We do have projects we’re talking to that have more animated avatars that you talk to. So while this is just the start of an interface, we actually expect Verida data to connect to lots of different types of AI products and services that have different ways of interacting with them. That’s super exciting, and hopefully, we can share more about some of those upcoming partnerships in due course.

As we touched on, the ability to connect your data is super important, and this is really where Verida has focused a lot of our effort. We’ve been building out the Verida Private Data Bridge, which is the underlying infrastructure that allows the ability to connect all these different connections and allows you to bring your data into the Verida Network and then connect your data in a secure way to these different AI agents and tools.

So if you’re a builder and you’re interested in building in this space, maybe you’re interested in building an AI agent that’s using data from users, please come and get in contact with us. Connect with us on Twitter or Discord. We can hopefully give you some early access to some APIs and some tools to allow you to start building sooner rather than later.

Verida Private Data Bridge

We did announce the Private Data Bridge. This screenshot shows an interface to the Private Data Bridge. We’ve been doing a lot of market research, talking to a lot of C-level executives, partners, and other interested parties. And it’s become very clear that the ability to have an AI agent that has access to your personal data is valuable, but equally as powerful is if it has access to all of your business information — your knowledge bases, your work email, your Slack, your Telegram, particularly if you’re in crypto, access to your Google Drive. And so part of what we’re doing is actually changing our language a little bit. So moving forward, instead of referring to it as the Personal Data Bridge, we’re actually going to start referring to it as the Private Data Bridge because that’s more encompassing of both personal data and business or organizational type data. In terms of technology, we’re not changing much; we still enable the same capabilities. So Private Data Bridge makes a lot more sense.

As you know, we are building in the open. So here’s a little sneak preview of the developer interface for the Private Data Bridge. This is not what end users will use, but this is what developers can use to talk to user data. As a developer, you’ll be able to connect your own data, and you can see the different connections that you’ve made. Obviously, we support the ability here to connect to Google accounts, and from this interface, you can easily sync your data or disconnect that source. If you click “Show Logs,” it actually opens up a little modal window, and it shows all of the current activity that’s happening when data is synchronizing, so you can get an insight into what’s happening. This is super helpful as a developer if you’re building a new connection.

So we support a number of different connections. We’ll support more than this at launch, but these are the ones that we’ve been working on so far. So as a developer, you’ll be able to come in here, write some code to create a new connection, and use this interface to interact with your connector.

As a developer, we’ll also add to this API documentation some API tools so you can easily write apps or AI agents that use user data. And as an example of that, we’ve got a very basic interface here where you can search the data that you have as a user and browse that in a very simple way. You can filter it and sort it.

You can look at different types of data. So again, as a developer, you can actually look under the hood, and you can see all the synchronization logs of connections. You can actually look at, you know, if you connect your Gmail, you can actually look at the raw emails that have been imported and synchronized, or social media posts and things like that. So this is just a basic interface, but this is what we’ve built so far for developers to help them integrate. And if you start building with the Verida AI technology stack, you’ll have access to all of these types of tools.
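The browse, filter, and sort flow described above can be approximated in a few lines. This is a hypothetical sketch over in-memory records, not Verida's actual SDK; the record fields and the `query` helper are invented for illustration.

```python
from datetime import date
from typing import Any

# Hypothetical records of the kind a data-bridge sync might produce;
# the field names are illustrative, not Verida's actual schema.
records: list[dict[str, Any]] = [
    {"source": "gmail", "type": "email", "subject": "Invoice", "synced": date(2024, 8, 1)},
    {"source": "telegram", "type": "message", "subject": "gm", "synced": date(2024, 8, 9)},
    {"source": "gmail", "type": "email", "subject": "Meeting notes", "synced": date(2024, 8, 5)},
]

def query(items, *, source=None, sort_key="synced", descending=True):
    """Filter by source (if given), then sort: the same browse/filter/sort
    flow the developer interface described above exposes over synced data."""
    matched = [r for r in items if source is None or r["source"] == source]
    return sorted(matched, key=lambda r: r[sort_key], reverse=descending)

newest_emails = query(records, source="gmail")
print([r["subject"] for r in newest_emails])  # prints ['Meeting notes', 'Invoice']
```

In a real integration the records would arrive from a connector (Gmail, Telegram, and so on) rather than a literal list, but the filtering and ordering a developer performs on them looks much the same.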

This is a work in progress, obviously, but the key thing here is that we have a window into your own data, and we’re using this for testing purposes and making this available to developers in the coming weeks, which is really exciting. So yeah, there’s a preview of what we’re doing with the Private Data Bridge and Private Data Connections.

And we really look forward to getting everyone that wants to build access and letting them have a play.

Reach out if you’re an AI builder

So yeah, that’s it for me in this fortnightly update. There’s a lot happening, as you can tell, behind the scenes on both the Private Data Bridge development and also the Verida AI tooling and showcase. So really looking forward to being able to present some really exciting things for you in the next update in a couple of weeks. In the meantime, keep your eye out. The light paper will be released very shortly, and we have a few other partnership announcements coming up as well. So thanks for tuning in. And as I said, if you’re interested in building in this space, building AI agents using user data, please reach out to us. We’d love to support you and get you early access.

📢 Verida Community Update — Verida.ai Launch and Network Explorer was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

Digital Travel Credentials (DTC) are leading the digital identity revolution

The post Digital Travel Credentials (DTC) are leading the digital identity revolution appeared first on Indicio.

By Trevor Butterworth

Analyst Alan Goode recently noted that “the travel industry is at the vanguard of digital identity adoption globally.”  

As a company leading the vanguard, with partner SITA, we agree. But there’s a lot to unpack here for consumers and other business sectors.

To legally cross a border, you must have a passport; therefore, it stands to reason that crossing a border “digitally” requires a digital identity that is as trustworthy as (or even more trustworthy than) a physical passport. We explored the idea of “government-grade” digital identity in a previous blog and how Digital Travel Credentials following standards set by the International Civil Aviation Organization (ICAO) achieve this grade by using decentralized identity technology.

This technology fundamentally changes the way we identify ourselves digitally and online, and the way we share and authenticate information.

They allow us to hold our own data in a highly protected way.
They make this held data cryptographically verifiable, so that it is portable and trustworthy. This eliminates the need for “identity accounts” that require logins and passwords, which are at risk of being phished or faked (for example, frequent flyer programs).
This in turn eliminates the need for identity accounts with personal data to be stored by third parties for verification (a security risk, because the data is stored in centralized databases that are difficult to protect against data breaches).
This also means that a person can hold their own biometric data, bind it to their digital identity, and have it cryptographically verifiable in a way that obviates the risk of AI deepfakes.
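The "cryptographically verifiable" property works because any change to the signed claims invalidates the issuer's proof. The sketch below illustrates this with Python's standard library, using an HMAC as a simplified stand-in for the public-key signatures (e.g. Ed25519) that real verifiable credentials use; the key and credential fields are purely illustrative.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; a real issuer signs with a private key
# and anyone can verify with the matching public key.
ISSUER_KEY = b"demo-issuer-key"

def issue(claims: dict) -> dict:
    """Serialize the claims canonically and attach a proof over them."""
    payload = json.dumps(claims, sort_keys=True, separators=(",", ":"))
    proof = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify(credential: dict) -> bool:
    """Recompute the proof; any tampering with the claims breaks it."""
    payload = json.dumps(credential["claims"], sort_keys=True, separators=(",", ":"))
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = issue({"holder": "did:example:123", "passport_valid": True})
assert verify(vc)                       # untampered credential verifies
vc["claims"]["passport_valid"] = False
assert not verify(vc)                   # any edit invalidates the proof
```

Because the proof travels with the data, a verifier needs nothing from a central database: the credential carries its own trust, which is what makes it portable.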

Centralized databases accessed through user accounts are a fundamentally weak way to manage identity and authentication to access resources. This is because they are susceptible to a single point of failure. 

Here’s a hypothetical: Imagine a business database with a million customer accounts and their account details. Imagine that all but one of those customers (999,999 of them) are hypervigilant: they change their passwords regularly and never click on suspicious SMS messages or emails. Then that one remaining person clicks, in error, on a phishing email and inputs their account login and password. That single phishing attack nets the personal details of all 999,999 vigilant customers, despite their vigilance.

That is the essence of data breaches and identity theft: an all-you-can-eat buffet costing billions of dollars in both losses and security spending. Current solutions treat the symptoms rather than the disease: multi-factor authentication, passwordless login, and single sign-on all add complexity, expense, and friction to what is meant to be an instant process, without removing the underlying problem.

And we haven’t even talked about complying with data privacy regulation.

What the travel sector has quickly realized is that decentralized identity solves all these critical identity and access management problems: Let the customer hold their data and let the portable trust created by decentralized identity do all the work. 

With government-grade verifiable identity credentials, travel can be seamless because we can authenticate this information when it is presented by customers. We don’t need to store and manage it. 

Tackling the biometric threat
Perhaps one of the most important and least commented on aspects to digital travel is that decentralized identity saves biometric systems from catastrophic risk.

Biometrics were the answer to passwords: Instead of the farce of coming up with new, complicated phrases every few months to manage your account login, use your face. Or voice. Or fingerprint. 

These became the seamless answer to password theft — until generative AI technology suddenly made biometrics easy to fake. And while you can reset a password, you can’t reset a person’s physiological characteristics. Once a person’s biometrics are stolen, how are they supposed to get them back? 

This is where verifiable credentials and decentralized identity come to the rescue. There are multiple ways to bind liveness and biometric information to an identity check such that you can be sure that I am who I claim to be. And because this biometric information can be verified cryptographically, it can be held by the traveler instead of being stored in an airline database, where it turns into a permanent privacy and security liability. 

Verifiable credentials save biometric systems.

First-class data sharing
This is why what’s happening in travel with digital identity is showing the world the future. We have taken the toughest use case — crossing a border digitally — and solved it to the satisfaction of governments, airlines, airports, AND travelers.

The combination of people holding their own data, deciding who they want to share it with, and this data being cryptographically verifiable rewrites the entire digital landscape. With portable trust, information can go anywhere.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Digital Travel Credentials (DTC) are leading the digital identity revolution appeared first on Indicio.


KuppingerCole

Sep 24, 2024: Navigating Data Challenges: Unlocking Power of Data Marketplaces

Modern enterprises face numerous data-related challenges, including siloed storage, security threats, and compliance requirements, making strategic and efficient data management essential. Navigating complex data landscapes requires ensuring data accessibility and security, while preventing unauthorized access and breaches. Robust data management strategies are key to maintaining competitive advantage and operational efficiency in today's fast-paced business environment. Data marketplaces – platforms that connect data producers of specific data products with data consumers who can leverage them for their own goals and projects – are an emerging technology that can power such strategies. Join experts from KuppingerCole Analysts and Immuta as they discuss how data marketplaces address challenges in data management. They will explain how this approach can enhance data access control and internal sharing, provide a centralized platform for managing data assets, help break down silos, ensure compliance, streamline governance, improve security, and foster innovation, driving business success in a data-driven world. Alexei Balaganski, Lead Analyst at KuppingerCole Analysts, will provide an overview of the risks and challenges in managing sensitive data at the enterprise level amidst the evolving compliance landscape. He will discuss how to balance security with accessibility and productivity, offering insights on reducing data friction while meeting regulatory requirements. Bart Koek, Field CTO at Immuta, will discuss strategies for promoting efficient and compliant data sharing, present practical use cases, explore best practices from real-world implementations of data marketplaces at leading organizations, and provide an overview of Immuta’s Data Security Platform.

Sunday, 11. August 2024

KuppingerCole

Identity Security - the Epicenter of Cybersecurity


In this episode of the KuppingerCole Analyst Chat, host Matthias Reinwarth is joined by Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, to discuss the evolving landscape of identity security. They explore the centrality of Identity and Access Management (IAM) in IT security, the rise of Identity Threat Detection and Response (ITDR), and the latest trends in fraud prevention. The conversation delves into the use of generative AI in cyber-attacks, the importance of gamification in cybersecurity, and the anticipated advancements in ITDR solutions. Join us to gain insights into these critical areas shaping the future of cybersecurity.




Spherical Cow Consulting

IAM’s Time Problem: Why Digital Attestation Needs Work

Identity management and digital attestation are crucial for verification and authenticity. The process involves proving the integrity of data through cryptographic techniques, and it has parallels to non-digital methods like notary services. Electronic ledgers, cryptography, and key management are all essential to secure digital attestation, but each brings challenges over long time frames.

Identity management has a time problem. Discussions in the hallways and conference calls for various identity and security standards focus on immediate, point-in-time requirements. Can this person or thing authenticate itself at the moment they need to? Are they authorized at that moment to access the system, service, or data they need to get their job done? Don’t get me wrong; those are big, important questions that need to be addressed. But sometimes, you need more. You need to dig into the past to determine responsibility for specific actions or the provenance of digital material. This area is called attestation and verification and is at least as complicated as proper authentication and authorization, especially when considered over longer time frames.

Understanding Modern Digital Attestation

Modern digital attestation is the process of proving or verifying the authenticity and integrity of a system, device, or data, often through the use of cryptographic techniques. Has the data been tampered with? Can it be trusted to be what it is supposed to be? Or have conditions changed such that you cannot immediately trust what you have?

The process of attestation has, of course, been around longer than computers. If you’ve used a notary service, you’ve encountered a non-digital attestation process. The notary verifies the identity of the person via official identity documents (such as a passport or driver’s license). They then witness the signature, provide a stamp attesting that the signature is genuine and the person has been identified, and record the whole transaction in a ledger.

In principle, digital attestation is much the same: an identity is verified, a credential is issued (or an existing one is used), signed, and an entry is made in an electronic ledger. In practice, however, the requirements around identity verification change based on context. Different industries, jurisdictions, and services all have different rules. (Discussing identity verification is a different blog post.) The signature attesting to the verification is where cryptographic magic comes in, and that’s where time becomes a challenge.
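The verify-sign-record flow described above can be sketched in a few lines. This is a simplified, hypothetical stand-in: it uses HMAC from Python’s standard library in place of a real PKI signature, and a plain list in place of a real ledger; all names are illustrative.

```python
import hashlib
import hmac
import json
import secrets

# In practice this key would live in an HSM; here it is just random bytes.
SIGNING_KEY = secrets.token_bytes(32)

def attest(document: bytes, verified_identity: str, ledger: list) -> dict:
    """Sign a document hash on behalf of a verified identity and record it."""
    entry = {
        "identity": verified_identity,
        "doc_sha256": hashlib.sha256(document).hexdigest(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    ledger.append(entry)  # the electronic ledger entry
    return entry

def verify(document: bytes, entry: dict) -> bool:
    """Check both the document hash and the signature over the ledger entry."""
    payload = json.dumps(
        {"identity": entry["identity"], "doc_sha256": entry["doc_sha256"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hashlib.sha256(document).hexdigest() == entry["doc_sha256"]
        and hmac.compare_digest(expected, entry["signature"])
    )

ledger = []
entry = attest(b"deed of sale", "Alice", ledger)
assert verify(b"deed of sale", entry)       # untampered document verifies
assert not verify(b"deed of sale!", entry)  # any tampering breaks verification
```

The point of the sketch is that verification needs only the document, the ledger entry, and the key material — exactly the pieces whose long-term management the rest of this post worries about.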

The Role of Electronic Ledgers in Attestation Over Time

Some people see “ledger” in the world of online data and think “blockchain.” That’s certainly one way to go about handling a ledger. Advocates argue that blockchain technologies are the One True Way to properly handle attestation over time. Everything is recorded, nothing can be deleted or changed, and everything is transparent to the entities that can access that blockchain. Of course, there are issues—such as the GDPR’s ‘Right to be Forgotten’ that states certain data must be deleted when requested—that make using blockchain technology a bit more complicated than anyone would want (great paper about that here).

All that said, ledgers do not need to exist in any particular blockchain format. In fact, for the purposes of this discussion, the format of the ledger (beyond it being digital instead of a dusty book somewhere) doesn’t matter. What matters is that signature and the associated attestation that’s being stored.

Cryptography’s Critical Role in Digital Attestation

You can’t talk about cryptographic signatures without understanding a few salient points about cryptography. Cryptography is used to enhance the security, scalability, and manageability of systems and data, from individual devices to large-scale distributed systems. Cryptography relies on keys that encrypt data. Symmetric cryptography allows the key that encrypts the data to also decrypt the data. Asymmetric cryptography requires one key for encryption and a different key for decryption. The amount of math involved in making all this work is staggering and entirely out of my pay grade. Fortunately, there are people in the world who think developing the math for advanced cryptographic systems is amazing. Thank you for your service.
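The symmetric/asymmetric distinction above can be made concrete. The toy XOR keystream below is NOT secure cryptography; it only demonstrates the symmetric property that the same key both encrypts and decrypts. (Asymmetric schemes such as RSA or Ed25519 require a real cryptographic library rather than the standard library alone.)

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key by hashing counters."""
    out = bytearray()
    for block in count():
        out.extend(hashlib.sha256(key + block.to_bytes(8, "big")).digest())
        if len(out) >= length:
            return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: applying the same key twice round-trips the data."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared secret"
ciphertext = xor_cipher(key, b"attestation payload")
assert xor_cipher(key, ciphertext) == b"attestation payload"   # same key decrypts
assert xor_cipher(b"wrong key", ciphertext) != b"attestation payload"
```

That symmetry is also the operational burden: whoever can verify can also forge, which is why asymmetric signatures (separate signing and verification keys) dominate attestation systems.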

You don’t have to know the math to use the systems (whew!). But you do have to give some thought to how to manage the keys used for encryption and decryption. Securely generating, using, storing, and revoking keys is a Very Big Deal and enough to keep any IT administrator or security practitioner on their toes. While there are several models to follow, including Public Key Infrastructure (PKI), the Key Management Interoperability Protocol (KMIP), Hardware Security Modules (HSMs), and several others, when you think about those being used over the course of decades, you’ll start to see some critical weaknesses in the system.

Today’s Attestation, Tomorrow’s Cipher

As people create digital attestation and verification specifications, they focus on the technologies available today. They also presume those technologies will be available tomorrow. And they’re right. Technologies will be available tomorrow. They will probably be available next year. Ten years from now, though? Twenty? One hundred?

Now for a different consideration: if you use a model that has one key signing all the things for a few years, how hard will it be to dig through the data to find any particular signing instance? How do you identify the point in time a key might have been compromised (i.e., copied and used by an unauthorized party) and then determine all the things signed with the compromised key? Now think about this exercise for data that’s a decade old.

This isn’t a scenario that will play out for everything that carries digital attestations today. Business records are often legally required to be kept for only 5-7 years. Similarly, personal tax records must be stored for only a limited time. But there are scenarios where the required time frames are much, much longer. In the U.S., copyright protection lasts for the lifetime of the author plus 70 years. Establishing provenance for artwork can span centuries.

Standards Matter

There are efforts that are starting to poke at the edges of the problem of digital attestation and verification. One example is the Coalition for Content Provenance and Authenticity (C2PA). That’s an effort coming out of the Joint Development Foundation, a non-profit that brings together the efforts of the Content Authenticity Initiative (CAI) and Project Origin. They are focusing on the provenance of media for publishers, creators, and consumers. Another effort, coming from a different angle, is the Supply Chain Integrity, Transparency and Trust (SCITT) initiative in the IETF. Their focus is on “the ongoing verification of goods and services where the authenticity of entities, evidence, policy, and artifacts can be assured and the actions of entities can be guaranteed to be authorized, non-repudiable, immutable, and auditable.”

But in both those cases, the focus is a bit more on today and less on decades from now. This is understandable when you think about it: if you can’t solve for today, you might not even get to next year, so focusing on immediate needs is a necessary step. Of course, that doesn’t mean you can ignore the longer term, and given the state of existing efforts, the longer term is a space ready for attention.

Exploring Solutions: Hierarchical Deterministic Keys for Scalable Attestation

OK, so no, I don’t have answers, but I was definitely inspired to learn more on this topic during IETF 120. I had the best hallway conversation about the issues of time, key management, and how identity practitioners really need to think harder about the long-term viability of the specifications under development. People were developing specifications and protocols that allowed for secure digital attestations (yay). They weren’t (aren’t) thinking about the fact that, over time, a significant percentage of the signatures will be revoked, and that has to go in the ledger as well. Long story short: ledgers won’t be scalable over any length of time.

The solution to this we discussed most was Hierarchical Deterministic Keys. HDKs can be used in attestation processes to create derived keys for specific operations or time frames. This allows the system to maintain a secure and scalable method of attestation by ensuring that each key is only valid for a particular purpose or time, minimizing the risk of compromise and reducing the need for frequent key revocation. Basically, every time you wield a key, you create a derived key so you can more easily identify when that key was used. Revocation becomes less of an issue when the scope of key use is constrained. Of course, if your master key is compromised, you’re kind of doomed, but that’s the case in any key management scenario.
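A minimal sketch of that idea, using HMAC from the Python standard library as the derivation function. The period labels and names are illustrative; real HD key schemes (BIP32-style, for instance) layer chain codes and asymmetric key pairs on top of this basic derive-per-context pattern.

```python
import hashlib
import hmac

def derive_key(master: bytes, context: str) -> bytes:
    """Deterministically derive a child key scoped to one context label."""
    return hmac.new(master, context.encode(), hashlib.sha256).digest()

master = b"long-lived master key (kept in an HSM)"
k_2024_q1 = derive_key(master, "signing/2024-Q1")
k_2024_q2 = derive_key(master, "signing/2024-Q2")

# Derivation is deterministic: an auditor holding the master key can
# re-derive any period's key and match it against old signatures.
assert derive_key(master, "signing/2024-Q1") == k_2024_q1

# Each period's key is independent, so revoking 2024-Q1 leaves
# signatures made under 2024-Q2 untouched.
assert k_2024_q1 != k_2024_q2

revoked = {"signing/2024-Q1"}

def key_is_trusted(context: str) -> bool:
    """A single ledger entry can revoke a whole scope, not one signature."""
    return context not in revoked

assert not key_is_trusted("signing/2024-Q1")
assert key_is_trusted("signing/2024-Q2")
```

Constraining each key to a narrow scope is what keeps the revocation ledger small: one entry retires a period, instead of one entry per compromised signature.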

A Use Case: Refugees

If you’ve read this far, you probably think this is an interesting problem, but you might want a realistic example. So let’s talk about Maria.

It’s the year 2045. Maria fled her home country 20 years ago due to a conflict. She arrived in a host country, where she was granted asylum and eventually settled. It’s become home, and now she wishes to apply for citizenship. As part of the application process, she needs to prove her identity and submit a birth certificate from her country of origin.

Maria’s original birth certificate was lost during her escape, but she has a digital copy of the document stored in a digital identity wallet issued by an international organization that assists refugees. This digital birth certificate was issued with a cryptographic signature attesting to its authenticity at the time of issuance. Digital credentials ftw!

But wait. That was 20 years ago. While they used the best cryptographic techniques at the time, the quantum apocalypse happened. The agency that issued Maria’s birth certificate has had to revoke many keys used for signing documents, either due to suspected compromise or the routine expiration of cryptographic keys. Each revocation must be recorded in a ledger, which has grown significantly over time. The host country has to search through the records for millions of refugees using old credentials; not exactly a trivial exercise.

But wait, there’s more!

The digital birth certificate’s provenance must be established across multiple jurisdictions, as Maria’s host country requires confirmation from the original issuing country (which has undergone significant political and administrative changes over the years). This requires coordination between different governments, each with their own systems and cryptographic practices. 

According to the United Nations High Commissioner for Refugees (UNHCR), “By May 2024, more than 120 million people were forcibly displaced worldwide as a result of persecution, conflict, violence or human rights violations. This includes: 43.4 million refugees. 63.3 million internally displaced people.” There are many Marias in the world today, and there will only be more in the coming years.

The Data Deluge: Preparing for the Future of Digital Attestation

According to Exploding Topics, 402.74 million terabytes of data are created daily. Not all of it will be kept. Not all of it will involve digital attestations as to its authenticity. But if even 1% of that data does require digital attestations that last for at least a decade, you’re looking at 14,700.01 exabytes of data in 10 years. That’s … a lot of data.
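The arithmetic behind that figure checks out; a quick sketch (assuming flat 365-day years):

```python
# Back-of-the-envelope check: 402.74 million TB created per day
# (Exploding Topics), of which 1% is assumed to need decade-long attestation.

daily_tb = 402.74e6          # terabytes created per day
attested_share = 0.01        # assumed share needing long-lived attestation
days = 3650                  # 10 years, ignoring leap days

total_tb = daily_tb * attested_share * days
total_eb = total_tb / 1e6    # 1 exabyte = 1,000,000 terabytes

print(round(total_eb, 2))    # → 14700.01 exabytes
```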

As we’re developing specifications that allow us to do very smart things to attest to today’s data authenticity, we really need to start thinking about what that means after 10 years of new data, new signatures, data revocation, and more.

As always, I’m hoping this post will be the start of a conversation. If you have more information on the scalability of long-term attestation, please let me know!

The post IAM’s Time Problem: Why Digital Attestation Needs Work appeared first on Spherical Cow Consulting.


Evernym

Cloud Solutions for Managing Digital Certificates: Advantages and Challenges


Cloud Solutions for Managing Digital Certificates: Advantages and Challenges In today’s digital landscape, cloud solutions have become integral to managing digital certificates. These certificates, crucial for securing communications and authenticating identities, play a pivotal role in safeguarding sensitive information. Cloud-based certificate management offers several advantages, but it also presents unique challenges ...

The post Cloud Solutions for Managing Digital Certificates: Advantages and Challenges appeared first on Evernym.

Friday, 09. August 2024

auth0

What Is Attribute-Based Access Control (ABAC) and How to Implement It in a Rails API?

There are different ways to implement an authorization system and the one you choose depends on your application's needs. Attribute-Based Access Control (ABAC) is just one of them, so let's go ahead and learn how to implement it in a Rails API.
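The article builds this in a Rails API; as a language-agnostic sketch of the core ABAC idea (all names below are illustrative), an access decision is just the evaluation of policies over attributes of the subject, resource, and action:

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict    # e.g. {"department": "finance", "clearance": 2}
    resource: dict   # e.g. {"owner": "finance", "sensitivity": 3}
    action: str

# Each policy is a predicate over the whole request; roles never appear,
# only attributes.
POLICIES = [
    # Anyone may read resources owned by their own department.
    lambda r: r.action == "read"
    and r.subject["department"] == r.resource["owner"],
    # Writes additionally require sufficient clearance for the sensitivity.
    lambda r: r.action == "write"
    and r.subject["department"] == r.resource["owner"]
    and r.subject["clearance"] >= r.resource["sensitivity"],
]

def allowed(request: Request) -> bool:
    """Permit if any policy matches (permit-overrides combining)."""
    return any(policy(request) for policy in POLICIES)

req = Request({"department": "finance", "clearance": 2},
              {"owner": "finance", "sensitivity": 3}, "write")
assert not allowed(req)   # clearance 2 < sensitivity 3: write denied
req.action = "read"
assert allowed(req)       # same-department read: allowed
```

Because the decision depends on attribute values rather than role membership, adding a rule (say, time-of-day or location) means adding a predicate, not restructuring the role model.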

uquodo

Web 3.6.0 and Mobile 3.1.3 updates

The post Web 3.6.0 and Mobile 3.1.3 updates appeared first on uqudo.


Thursday, 08. August 2024

Anonym

6 Facts About Digital Identities from One of the World’s Most-Streamed Cybersecurity Podcasts


Anonyome Labs’ CTO Dr Paul Ashley recently appeared on one of the most-streamed cybersecurity podcasts in the world, The Bid Picture with Bidemi Ologunde, to discuss some of the hottest topics in privacy and cybersecurity today.  
 

The wide-ranging interview covered:

- Digital identities, and how Anonyome Labs has packaged them for consumers as “Sudos” in MySudo, the world’s only all-in-one privacy app, and for businesses through our decentralized identity solutions
- Surveillance capitalism and the concept that if you’re not paying for the product, you are the product, especially with companies such as Google and Meta whose main source of revenue is users’ personal data
- The rapid spread of artificial intelligence and its applications for both good and evil, including in surveillance capitalism, data broking and data abuse
- Privacy advice for everyday consumers to protect their personal information
- The greatest privacy development of the decade, decentralized identity, and how its centerpiece, reusable credentials, is transforming the identity management space and handing consumers back control over their personal information
- The urgent and ongoing need for frictionless, simple privacy tech for consumers and business, and how Anonyome Labs will continue to deliver both, building on its 10-year history pioneering in the space
Key moments in the fascinating discussion include when Dr Ashley explained to Bidemi Ologunde:

- Sudo digital identities were inspired by proxies in cybersecurity: “We thought, how could we apply a proxy to a normal user? A Sudo is a proxy for online and offline life used for all different situations. Use your Sudo persona instead of your personal data and plastic credit cards,” Dr Ashley said.
- Lots of businesses, such as plumbers and law enforcement, use MySudo to separate their professional from their personal communications: “The end-to-end encrypted functionality of MySudo is [particularly] useful for law enforcement,” Dr Ashley explained.
- Some of the reasons MySudo is the world’s only all-in-one privacy app are that we don’t collect or store our users’ personally identifiable information, and the app offers disposable and customizable payment cards, phone numbers, email, and browsers all in the one app.
- Parents looking to manage the risks of social media for their children should take at least these five steps, because every step increases their level of privacy:
  - Step 1: Use a VPN, such as MySudo VPN.
  - Step 2: Use a safe browser (MySudo has private browsers with site reputation and ad and tracker blockers built in).
  - Step 3: Use one of the more private search engines, such as DuckDuckGo or the new honest search engine FreeSpoke.
  - Step 4: Get MySudo for compartmentalization, and set up all your kids’ gaming accounts with Sudo information (never their own or your personal information).
  - Step 5: Use a password manager to manage and store all your different passwords (use a different password on every account).
- The deeper problem with AI is that it can scan vast quantities of data and link them, identifying the user: “AI has risk of being used for surveillance capitalism but … there’s a lot of scope going forward to use AI tech constructively, such as for privacy products,” Dr Ashley said. Watch this space!
People have a lot of awareness of the need for privacy but not a lot of understanding of the technology available. “Anonyome Labs will continue to create simple products that are frictionless. One of our goals is to make the tech simple for normal users,” Dr Ashley said. One example is the MySudo browser extension, which makes it easier to use Sudos on desktop. Another aspect of the future of privacy and cybersecurity is decentralized identity (or self-sovereign identity) and verifiable credentials. “This important technology is giving users control of their personal data and letting the user be in the middle of any data exchange,” Dr Ashley said. While DI is a big enough topic for its own episode of The Bid Picture, Dr Ashley did touch on the notion of consumers carrying reusable or verifiable credentials in an identity wallet and selectively disclosing only relevant personal information on request from services. “This is yet another tool in your privacy basket, and it’s been designed from the ground up for privacy,” Dr Ashley explained.

 Listen to the podcast episode  
 

Anonyome Labs is the leader in proactive identity protection technologies. From verifiable credentials to VPNs and encrypted communications, we leverage our cryptography and blockchain technology expertise to take data privacy and security to the next level. Check out our podcast, Privacy Files, to hear what your peers and experts are saying about the state of member and consumer privacy in real time. 

 
The Bid Picture podcast provides an array of information about cybersecurity. It includes the latest news and facts to keep listeners up-to-date with the most current events and developments in cybersecurity.  

The post 6 Facts About Digital Identities from One of the World’s Most-Streamed Cybersecurity Podcasts appeared first on Anonyome Labs.


HYPR

HYPR and Microsoft Partner on Entra FIDO2 Provisioning APIs


Yesterday at the Black Hat conference, Microsoft announced the public preview of Entra FIDO2 provisioning APIs. HYPR worked closely with Microsoft on these critical enhancements, which make it easier for Entra customers to provision passkeys for their users. Like the EAM integration unveiled a few months ago, collaborative development of such features is essential to fuel adoption of secure, phishing-resistant authentication methods. We are honored that Microsoft named HYPR as a fully-tested vendor to help Entra customers on their FIDO2 provisioning journey.

"This partnership underscores our commitment to delivering a secure and interoperable ecosystem for our customers… Their involvement has been instrumental in ensuring that the APIs are robust, versatile, and ready for real-world challenges."

– Tim Larson, Senior Product Manager on Microsoft Entra

What Are the Microsoft Entra FIDO2 Provisioning APIs?

Credential compromise is the top entry vector for attacks. Adversaries use phishing, adversary-in-the-middle (AitM), social engineering, and other tactics — increasingly aided by AI — to steal passwords and MFA tokens to log in as legitimate users. These breaches are very hard to detect until the damage is already underway. Phishing-resistant authentication based on FIDO2 standards is the single most effective way organizations can protect themselves and their users against such threats. The Microsoft Entra FIDO2 provisioning APIs encourage FIDO2 deployment and adoption by making it easier for users to enroll passkeys as an authenticator. Organizations can build their own admin provisioning clients, or work with a provider like HYPR, which leverages the new APIs.

How It Works

Using the new APIs, it’s quick and simple to provision a FIDO2 security key / passkey as a credential for Entra ID. Previously, users had to manually register their security key with Entra ID. The APIs eliminate this step, letting organizations handle the registration on behalf of their users. They work with both hardware FIDO2 keys and virtual FIDO2 security keys like HYPR.

What Does It Mean for HYPR Customers?

The new APIs further optimize the HYPR integration with Microsoft Entra ID. Leveraging their functionality streamlines provisioning of HYPR Enterprise Passkeys, making them the ideal authentication option for Microsoft Entra environments. Users simply pair their Windows workstation with HYPR and the passkey is automatically added to their Entra profile. As you can see in the video below, the entire process takes less than a minute.

Enrolling HYPR Enterprise Passkeys using the new Microsoft Entra ID FIDO2 provisioning APIs

HYPR Enterprise Passkeys

HYPR Enterprise Passkeys are Microsoft-approved and validated, FIDO Certified device-bound passkeys. They provide the assurance of a hardware key, including provenance attestation, and the convenience of a mobile authenticator app. With Enterprise Passkeys, users authenticate with a single gesture to gain access to Entra ID and all downstream apps. If they use HYPR to log into their desktop, the authenticated identity is automatically passed to Entra ID.

Enterprise Passkeys work in both fully Entra-joined and hybrid-joined environments, with multiple transport options for greater flexibility.

Learn More About HYPR and the Microsoft Entra FIDO2 Provisioning APIs

The Microsoft Entra FIDO2 Provisioning APIs are now in public preview. Read Microsoft’s technical documentation for more details about how it works. To learn more about how HYPR leverages the new APIs and HYPR Enterprise Passkeys for Entra ID, talk to our team!

 


KuppingerCole

Where Do Organizations Stand With a Comprehensive IAM Blueprint?


by Martin Kuppinger

Work is still to be done to see widespread comprehensive IAM in place. In a survey run by KuppingerCole Analysts, participants reported the status of their IAM blueprint.

Recent Survey Results

While 40% of participants reported that they do have a comprehensive IAM blueprint in place, a large portion of participants are currently putting it in place or do not have one. 26.8% are in progress with implementing a future-ready IAM blueprint without indication of where they are in the process, and 33.1% do not have one in place.

Figure 1: Companies that have a comprehensive IAM blueprint in place; KuppingerCole Survey, August 2024, sample size 447

The Identity Fabric models a comprehensive IAM implementation

A comprehensive, future-ready IAM blueprint should follow the Identity Fabrics paradigm. An “Identity Fabric” refers to a logical infrastructure for enterprise IAM, conceived to enable access for all, from anywhere to any service while integrating advanced features.

The demands on a future-ready IAM are complex, diverse, and sometimes even conflicting. These include:

- Different types of identities must be integrated quickly and securely in user-friendly flows.
- B2B onboarding and IAM must be facilitated in the challenging context of supply chain security.
- Employees (internal and external) should be able to use the devices they prefer.
- Secure access to working environments must be possible no matter where users and systems are located.
- Identities must be linked to reflect relationships within teams, companies, families, or partner organizations.
- Zero Trust features, such as continuously verifying access, must be included.
- Identities maintained in trusted organizations should be directly and reliably integrated and authorized in our IAM.
- Identities should be able to do business and execute payments.
- All relevant laws and regulations must be observed.
- Existing data on identities and entitlements should be applicable for analytics and artificial intelligence.
- All this must apply to all possible identities, beyond people, so that devices, services and networks are integrated into our next generation IAM infrastructure.

Figure 2: KuppingerCole Identity Fabric

The Identity Fabric shows the identities on the far left, the services on the far right, with capabilities required, services needed, and tools to leverage in the center. A more extensive description can be found in the 2024 Leadership Compass on Identity Fabric providers.

Today’s IAM systems meet, if at all, only a fraction of current requirements. And while organizations are moving towards more future-proof blueprints like those based on the Identity Fabric, the current survey results suggest that there is still work to be done.

Why invest in a comprehensive IAM implementation?

There are various good reasons for organizations to invest in such a comprehensive blueprint and implement their own Identity Fabric. One is the overlapping capabilities between many areas of IAM: Identity Fabrics help streamline investments and avoid unnecessary redundancies. Another is moving to a modern architecture: Identity Fabrics define such a modern, future-proof architecture, including the segregation of customization and the orchestration of services. A third is uniting the teams: one IAM delivered by one team, not many disparate, siloed efforts. Finally, prioritization: Identity Fabrics help prioritize investments and analyze the gaps.


Ontology

Self-Sovereign Identity

Empowering Digital Identity in the Modern Era

In today’s digital-first world, self-sovereign identity (SSI) has emerged as a revolutionary concept, transforming how we manage and control our digital identities. SSI empowers individuals to own and govern their online personas without relying on centralized authorities, addressing critical issues of equity, data ownership, privacy, and trust in the digital realm.

Equity and Digital Inclusion

SSI has the potential to bridge the digital divide and promote equity by providing a universal means of identity verification. This is particularly crucial for the estimated 1 billion people worldwide who lack official identification. By enabling individuals to create and manage their own digital identities, SSI can grant access to essential services, financial inclusion, and participation in the digital economy to those previously marginalized.

Data Ownership and Personal Control

A fundamental principle of SSI is that individuals should have complete ownership and control over their personal data. In traditional systems, our information is often scattered across various centralized databases, leaving us vulnerable to data breaches and unauthorized access. SSI allows users to store their data locally or in decentralized systems, granting them the power to decide what information to share and with whom.

Addressing Centralization Risks

Centralized identity systems pose significant risks to privacy and security. Data breaches in large organizations have exposed millions of individuals’ personal information. SSI mitigates these risks by eliminating single points of failure and reducing the attractiveness of centralized databases to malicious actors. By distributing identity information across a decentralized network, SSI enhances both privacy and security in the digital ecosystem.

Building Digital Trust

SSI leverages cryptographic technologies to create verifiable credentials that can be trusted without relying on a central authority. This approach enables secure and private digital interactions between individuals and organizations, fostering a more trustworthy online environment. Users can selectively disclose only the necessary information for each interaction, maintaining their privacy while still providing verifiable proof of their claims.
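To make selective disclosure concrete, here is a simplified, illustrative Python sketch (not any particular SSI standard such as SD-JWT or AnonCreds): the issuer commits to each claim with a salted hash, and the holder later reveals only the claims a given verifier needs, while the verifier can still check the revealed values against the issuer's commitments.

```python
import hashlib
import secrets

def issue_credential(claims: dict) -> dict:
    """Issuer salts and hashes each claim; in a real system the digest
    list would be covered by the issuer's digital signature."""
    salted = {k: (secrets.token_hex(16), str(v)) for k, v in claims.items()}
    digests = {
        k: hashlib.sha256((salt + val).encode()).hexdigest()
        for k, (salt, val) in salted.items()
    }
    return {"salted_claims": salted, "digests": digests}

def present(credential: dict, reveal: set) -> dict:
    """Holder reveals only the selected claims (value plus salt);
    the other claims stay hidden behind their digests."""
    return {
        "revealed": {k: credential["salted_claims"][k] for k in reveal},
        "digests": credential["digests"],
    }

def verify(presentation: dict) -> bool:
    """Verifier recomputes the digest for each revealed claim only."""
    for k, (salt, val) in presentation["revealed"].items():
        if hashlib.sha256((salt + val).encode()).hexdigest() != presentation["digests"][k]:
            return False
    return True
```

A holder can thus prove, say, a birth year without ever transmitting name or nationality, which is exactly the "disclose only the necessary information" property described above.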

AI and Proof of Humanity

As artificial intelligence becomes more sophisticated, distinguishing between human and AI-generated content or interactions becomes increasingly challenging. SSI can play a crucial role in providing proof of humanity, ensuring that digital interactions are genuinely human-to-human when necessary. This has implications for combating fraud, spam, and maintaining the integrity of online communities and marketplaces.

Overcoming Implementation Challenges

While SSI offers numerous benefits, its widespread adoption faces challenges such as technical complexity, regulatory hurdles, and the need for interoperability standards. However, as awareness grows and technologies mature, SSI has the potential to revolutionize how we interact in the digital world, putting individuals back in control of their digital selves.

In conclusion, self-sovereign identity represents a paradigm shift towards a more equitable, secure, and user-centric digital identity ecosystem. By addressing issues of data ownership, privacy, and trust, SSI empowers individuals and paves the way for a more inclusive and resilient digital future.

Self-Sovereign Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Civic

Tokenized Identity: Unmasking Robots and Sybils With Jeremy Dillingham, Passport.xyz

In this episode of Tokenized Identity, Titus Capilnean, our VP of Go-To-Market, speaks with Jeremy Dillingham, Passport.xyz. They explore identity use cases and the required levels of verification, bot blocking, Sybils, and KYC requirements for protocols, tokens and contracts. Jeremy is part of Passport.xyz, which was formerly Gitcoin Passport. Passport.xyz is focused on empowering digital, […]

The post Tokenized Identity: Unmasking Robots and Sybils With Jeremy Dillingham, Passport.xyz appeared first on Civic Technologies, Inc..


Verida

Own your AI future

Secrets should be kept with those you trust, like data.

Imagine an AI like ChatGPT with 100% end-to-end privacy that works for you only.

One private vault, multiple data sources. Your personal data secured.

Your AI trained under encryption, to know you and support you.

We have built privacy preserving infrastructure for hyper personal AI experiences. To guarantee your safety. And autonomy.

Take the first step. Write your own story. Own your AI future.

Join the waitlist at Verida.ai

Own your AI future was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Crypto regulatory affairs: Swiss regulator publishes guidance for stablecoin issuers and banks offering guarantees

Switzerland’s financial sector watchdog has released regulatory guidance for issuers of stablecoins, and for the banks providing them guarantees against default. 


uquodo

Your guide to KYC in Oman

The post Your guide to KYC in Oman appeared first on uqudo.


PingTalk

Identity-Centric Finance Regulations - Asia-Pacific

See which financial regulations in Asia, Japan, and the Pacific have stringent identity standards, and how identity access management helps achieve compliance.

In Asia, Japan, and the Pacific (APJ), the heterogeneity of identity-centric bank and finance regulations distinguishes this region from the rest of the world. Given the many countries and diverse makeup of the region, the regulatory framework is more favorable to innovation and identity plays a more central role in driving such progress. 


Patricia Or and Vicky Cheng, Regulatory Affairs Specialists at Bloomberg, explain the distinctive nature of financial services regulations in the region.

 

Wednesday, 07. August 2024

1Kosmos BlockID

What Is 3FA (Three-Factor Authentication)?

How secure are you in a world where data breaches and cyber-attacks make headlines daily? You might think you’re doing enough if you’ve already upgraded to Two-Factor Authentication (2FA). However, the cyber world and its threats are evolving—enter Three-Factor Authentication (3FA). This enhanced security protocol adds an extra layer of armor, making unauthorized access even more complex. In this comprehensive guide, we dive deep into the what, why, and how of 3FA, providing insights that can help you bolster your cybersecurity posture.

The Foundations of 3FA

What is 3FA?

Three-Factor Authentication (3FA) is a security protocol that adds an extra layer of protection on top of the traditional Two-Factor Authentication (2FA). 3FA requires users to present three identifying factors before accessing an account, app, or system.

These factors typically involve something the user knows (a password), something the user has (a trusted mobile phone or device), and something the user is (biometric data).

The concept behind 3FA is straightforward: the more authentication factors involved, the harder it is for unauthorized users to gain access. It’s a comprehensive approach to security in which the extra verification steps reduce the chances of a breach.

The Evolution from 2FA to 3FA

Two-factor authentication (2FA) has been the industry standard for securing accounts and systems against threats such as stolen passwords. However, as cyber threats grow in sophistication, there is an increasing need for more rigorous security measures.

3FA evolved as a response to this need, incorporating an additional layer of security beyond the password, making it even more difficult for unauthorized users to access accounts.

This third layer could take various forms, such as a fingerprint scan, another biometric identifier, or a behavioral pattern, depending on the system in question and its security requirements. By adding this additional layer, 3FA significantly raises the bar for attackers trying to compromise a system.

Who Needs to Know About 3FA?

3FA is increasingly relevant to a broad audience. Organizations dealing with sensitive or classified information are generally considered the most obvious candidates for 3FA.

This includes government agencies, healthcare institutions, and financial firms. However, any organization seeking to bolster its cybersecurity posture can benefit from implementing 3FA.

Moreover, individual users with a heightened need for security, such as celebrities, executives, or public figures, can also benefit from 3FA. Even the general public is beginning to appreciate more advanced security protocols as awareness of cyber threats grows.

How Does 3FA Work?

The mechanics of 3FA are a natural extension of 2FA, with the difference being the addition of a third factor for validation. Like 2FA, the user must provide two forms of identification, plus a third, distinct type of identity-confirming credential for verification.

The 3FA Process Explained

Typically, 3FA starts with the user entering a username and a password. Next, a secondary device, like a smartphone, receives a time-sensitive code.

After entering this code, the user must provide a third form of identification: a fingerprint or retina scan, a voice recognition test, or some other form of biometric verification. Only after successfully passing through all three gates does the user gain access to the system or account.
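The three gates can be sketched in a few lines of Python. This is a minimal, illustrative example, not any vendor's implementation: a hashed password stands in for the knowledge factor, an RFC 6238-style time-based one-time code for the possession factor, and a toy vector-distance check stands in for real biometric matching.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time code (the possession factor)."""
    counter = struct.pack(">Q", int(at // step))
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def authenticate(password, otp, biometric_vec, *, pw_hash, secret, template, now=None) -> bool:
    """Grant access only if all three independent factors check out."""
    now = time.time() if now is None else now
    knows = hashlib.sha256(password.encode()).hexdigest() == pw_hash  # factor 1
    has = hmac.compare_digest(otp, totp(secret, now))                 # factor 2
    # Toy inherence check: biometric feature vector within a threshold.
    dist = sum((a - b) ** 2 for a, b in zip(biometric_vec, template)) ** 0.5
    is_user = dist < 0.1                                              # factor 3
    return knows and has and is_user
```

Failing any single factor denies access, which is the whole point: an attacker who phishes the password still needs the enrolled device and the enrolled biometric.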

Types of Factors in 3FA

The factors used in 3FA fall into three categories: knowledge-based (something you know), possession-based (something you have), and inherence-based (something you are).

Knowledge-based factors include passwords and PINs, possession-based factors encompass mobile devices or smart cards, and inherence-based factors refer to biometrics like fingerprints or iris scans.

Different combinations of these categories can be employed depending on the level of security required. It’s important to note that the three factors should come from distinct categories to maximize the security benefits.

3FA Protocols and Mechanisms

Several protocols and mechanisms support the implementation of 3FA—these range from standard protocols like OAuth and OpenID to specialized options for high-security environments.

Additionally, hardware tokens and biometric fingerprint scanners might be integrated into the system for the third factor.

The appropriate protocol and mechanism selection depends on various factors, including the organization’s existing infrastructure, user needs, and specific security requirements.

Benefits of 3FA

 

3FA offers many advantages, making it a worthy investment for organizations seeking robust security solutions. Not only does it dramatically reduce the chances of unauthorized access, but it also aligns well with various regulatory standards.

Improved Security Posture

Undoubtedly, the most significant benefit of 3FA is its enhanced security. By requiring users to present three distinct forms of verification before accessing accounts, 3FA makes it exponentially more challenging for unauthorized users to gain access. This is particularly beneficial for organizations handling sensitive data, where the stakes of identity theft are high.

Regulatory Compliance

Another advantage of 3FA is its alignment with various regulatory standards. For organizations that must comply with guidelines such as GDPR, HIPAA, or PCI-DSS, implementing 3FA can aid in achieving and maintaining compliance. It is a tangible demonstration of an organization’s commitment to safeguarding user data.

User Experience and Usability

While adding more steps to the login process might seem like a burden, many modern 3FA solutions are designed with user experience in mind. Biometric authentication, for instance, can be quicker and more natural than entering a complex password. As a result, the additional security layer does not necessarily come at the expense of usability.

Implementing 3FA

 

 

Technical Requirements

Implementing 3FA will inevitably require some technological adjustments. At a minimum, organizations must ensure they have the infrastructure to support this type of security measure. This could include software that supports multi-factor authentication protocols and hardware like biometric scanners or token generators.

A secure and reliable network is also essential for 3FA to function optimally. While cloud-based solutions are available, organizations must maintain network security protocols to minimize potential vulnerabilities.

Costs and Budgeting

The implementation of 3FA involves both upfront and ongoing costs. Upfront costs may include the purchase of hardware and software and the expenses related to system integration. Ongoing costs can encompass maintenance, updates, and possibly licensing fees.

Budgeting for 3FA should consider both the direct costs and the potential savings from reduced security incidents. While the initial investment can be significant, the long-term benefits often justify the expenditure.

Common Pitfalls and How to Avoid Them

While 3FA offers enhanced security, poor implementation can undermine its effectiveness. One common pitfall is inadequately trained staff, leading to user errors that compromise security. Proper training and awareness programs can mitigate this risk.

Another issue is over-reliance on a single category of authentication factor, such as using multiple biometric identifiers, which defeats the purpose of multi-factor authentication. A diversified approach using various types of authentication factors is recommended.

Potential Challenges and Criticisms

The Complexity Issue

One of the criticisms of 3FA is the added complexity it introduces. Critics argue that while the system is more secure, it is also more cumbersome to use. However, many 3FA solutions focus on improving the user experience to mitigate this issue, and the benefits of heightened security often outweigh the downsides.

Reliance on Technology

Another concern is the heavy reliance on technology, such as smartphones or biometric devices, which could malfunction or be lost.

This reliance creates a potential weak link in the security chain. To counter this, backup options and alternative authentication methods should be part of any comprehensive 3FA strategy.

User Acceptance and Training

As with any new system, user acceptance is often a hurdle. People generally resist change, particularly regarding technology that requires them to alter their habits. Effective training and awareness programs can go a long way in facilitating smooth adoption.

Emerging Trends in 3FA

As the digital identity landscape evolves, so too does 3FA. One emerging trend is the integration of artificial intelligence to improve the efficiency and accuracy of the authentication process. Machine learning algorithms could, for example, analyze user behavior to provide a more dynamic and secure form of authentication.

Integrating more advanced biometrics and AI offers promising avenues for 3FA’s development. Beyond facial recognition, fingerprints, and iris scans, new forms of biometric data, such as heart rate or brainwave patterns, are being explored.

Blockchain technology has also been touted as a possible element in the future of 3FA. It offers the potential for decentralized authentication methods that are not only secure but also more user-friendly. The immutable nature of blockchain records can further enhance the security aspects of 3FA transactions.

To wrap it all up, 3FA offers a heightened level of security that is becoming increasingly essential in our digitalized world. The potential applications and benefits are vast, from government agencies to everyday internet users. While implementing 3FA involves a range of logistical and technological considerations, the upside in terms of cybersecurity makes it a worthy investment. If you’re committed to taking your organization’s digital security to the next level, don’t hesitate to contact our team today.

The post What Is 3FA (Three-Factor Authentication)? appeared first on 1Kosmos.


Microsoft Entra (Azure AD) Blog

Public preview: Microsoft Entra ID FIDO2 provisioning APIs

Today I'm excited to announce a great new way to onboard employees with admin provisioning of FIDO2 security keys (passkeys) on behalf of users.

 

Our customers love passkeys as a phishing-resistant method for their users, but some were concerned that registration was limited to users registering their own security keys. Today we’re announcing the new Microsoft Entra ID FIDO2 provisioning APIs, which empower organizations to handle this provisioning for their users, providing secure and seamless authentication from day one.

 

While customers can still deploy security keys in their default configuration, or allow users to bring their own security keys (which requires self-service registration), the APIs allow keys to be pre-provisioned for users, so users have an easier experience on first use.

 

Adopting phishing-resistant authentication is critical - attackers have increased their use of Adversary-in-the-Middle (AitM) phishing and social engineering attacks to target MFA-enabled users. Phishing-resistant authentication methods, including passkeys, certificate-based authentication (CBA), and Windows Hello for Business, are the best ways to protect from these attacks.

 

Phishing-resistant authentication is also a key requirement of Executive Order 14028 which requires phishing-resistant authentication for all agency staff, contractors, and partners.  While most federal customers use preexisting smartcard systems to achieve compliance, passkeys provide a secure alternative for their users looking for improved ways to securely sign in. With today’s release of admin provisioning, they also have a simplified onboarding process for users.

 

With the Microsoft Entra ID FIDO2 provisioning APIs organizations can build their own admin provisioning clients, or partner with one of the many leading credential management system (CMS) providers who have integrated our APIs in their offerings.

 

Tim Larson, Senior Product Manager on Microsoft Entra, will now walk you through this new capability that will help in your transition towards phishing-resistant multifactor authentication (MFA).    

 

Thanks, and please let us know your thoughts!

 

Alex Weinert

 

--

 

Hello everyone,

 

Tim here from the Microsoft Entra product management team. I’m excited to share with you our new passkey (FIDO2) provisioning capabilities in Entra ID!

 

Back in May we shared how we’re expanding passkey support in Microsoft Entra ID with the addition of device-bound passkey support in Microsoft Authenticator. As part of our commitment to provide more passkey capabilities we’ve enhanced our passkey (FIDO2) credential APIs to make onboarding security keys for users more convenient.

 

How does it work?

 

With the enhancements made to our passkey (FIDO2) credential APIs you can now request WebAuthn creation options from Entra ID and use the returned data to create and register a passkey credential on behalf of a user.

 

To simplify this process, three (3) main steps are required to register a security key on behalf of a user.

 

 

 

1. Request creationOptions for a user: Entra ID will return the necessary data for your client to provision a passkey (FIDO2) credential. This includes information like user information, relying party, credential policy requirements, algorithms, and more.
2. Provision the passkey (FIDO2) credential with the creationOptions: Using the creationOptions, utilize a client or script which supports the Client to Authenticator Protocol (CTAP) to provision the credential. During this step you’ll need to insert a security key and set a PIN.
3. Register the provisioned credential with Entra ID: Utilizing the output from the provisioning process, provide Entra ID with the necessary data to register the passkey (FIDO2) credential for the targeted user.
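At a high level, a provisioning client makes two Graph calls around the CTAP interaction. The Python sketch below only builds the requests; it is illustrative, not production code: the endpoint paths, query parameter, and body shape are assumptions based on the preview description, and the CTAP step with the physical key is represented by a comment.

```python
GRAPH = "https://graph.microsoft.com/beta"  # preview APIs live under /beta

def creation_options_request(user_id: str, timeout_minutes: int = 60):
    # Step 1: ask Entra ID for WebAuthn creationOptions for the target user.
    # (Path and parameter names here are illustrative assumptions.)
    url = (f"{GRAPH}/users/{user_id}/authentication/fido2Methods/creationOptions"
           f"?challengeTimeoutInMinutes={timeout_minutes}")
    return ("GET", url, None)

# Step 2 happens out of band: feed the returned creationOptions to a
# CTAP-capable client or library, insert the security key, set its PIN,
# and collect the attestation output.

def register_request(user_id: str, attestation: dict, display_name: str):
    # Step 3: hand the provisioning output back to Entra ID to register
    # the credential for the targeted user. (Body shape is illustrative.)
    url = f"{GRAPH}/users/{user_id}/authentication/fido2Methods"
    body = {"displayName": display_name, "publicKeyCredential": attestation}
    return ("POST", url, body)
```

In a real client you would send these requests with an authenticated Graph session; consult the Microsoft Graph documentation linked below for the exact schema.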

 

Build your own app or use a CMS vendor offering

 

In addition to providing the tools above, Microsoft has also collaborated with 10 leading vendors in the CMS space to integrate the new FIDO2 provisioning APIs. These vendors have rigorously tested the new APIs and are fully versed in them, and are available to help you in your provisioning journey if creating your own integration isn’t something you want to do.

 

This partnership underscores our commitment to delivering a secure and interoperable ecosystem for our customers. These vendors represent a diverse range of CMS solutions, each bringing unique insights and expertise to the table. Their involvement has been instrumental in ensuring that the APIs are robust, versatile, and ready for real-world challenges.

 

As we roll out the public preview, we are proud to announce that these vendors have pledged their support, integrating the APIs into their platforms. This collaboration not only enhances the security landscape but also paves the way for seamless adoption across various industries.

 

 

 

What’s next?

 

This public preview is the next step in our passkey journey, and we’re gearing up for even more passkey (FIDO2) provisioning features. We’re looking forward to building provisioning capabilities into the Entra admin center, which will give help desk staff and other admins the ability to directly provision FIDO2 security keys for users.

 

To learn more about everything discussed here, check out how to enable passkeys (FIDO2) for your organization and review our Microsoft Graph API documentation. Reach out to your preferred CMS provider to learn more about their integrations with the Microsoft Entra ID FIDO2 Provisioning APIs.

 

Thanks,

Tim Larson

 

 

Read more on this topic 

Public preview: Expanding passkey support in Microsoft Entra ID - Microsoft Community Hub

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog   ⁠⁠Microsoft Entra blog | Tech Community   ⁠Microsoft Entra documentation | Microsoft Learn  Microsoft Entra discussions | Microsoft Community  

 


Ontology

Securing Love in the Digital Age

How Decentralized Identity Can Revolutionize Dating Apps

The recent analysis of 15 popular location-based dating (LBD) apps revealed alarming privacy and security vulnerabilities. These issues expose users to risks ranging from stalking and harassment to identity theft. Decentralized identity solutions, particularly ONT ID from Ontology Network, offer a promising approach to mitigate these concerns.

Easy Account Creation and Verification

Problem: The study found that 7 out of 15 apps only require an email address to create an account, making it easy for adversaries to create fake profiles.

Solution: ONT ID can provide verifiable credentials for account creation without storing sensitive data on the app’s servers. This allows for robust user authentication while maintaining privacy.

Excessive Personal Data Exposure

Problem: Many apps expose large amounts of personal data in the user interface, including sensitive information like ethnicity and sexual orientation.

Solution: With ONT ID, users can selectively share only the necessary attributes for matchmaking. The decentralized nature ensures that users retain control over their personal information, reducing the risk of data breaches and unauthorized access.

Inadvertent Data Leaks

Problem: The study uncovered significant API traffic leaks, exposing data that users believed to be hidden.

Solution: By leveraging blockchain technology, ONT ID can ensure that only explicitly shared data is accessible. This aligns the user’s expectations with actual data exposure, eliminating inadvertent leaks through API traffic.

Location Privacy Vulnerabilities

Problem: 6 apps were found to be susceptible to exact location tracking through trilateration attacks.

Solution: ONT ID can implement privacy-preserving location verification. Users could prove their proximity to potential matches without revealing exact coordinates, protecting against stalking and location-based attacks.
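The trilateration attack itself is simple geometry: if an app reveals a user's exact distance from three attacker-chosen points, the coordinates fall out of a small linear system. The sketch below (a flat 2D plane, an assumption that ignores Earth curvature) shows why exposing precise distances is dangerous and why a privacy-preserving design must coarsen them.

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Recover the exact (x, y) position from three distance readings
    taken from non-collinear probe points p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # non-zero when the probes are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

Three queries suffice to pinpoint a target exactly, which is why a defense such as ONT ID's proximity proof should only ever disclose coarse buckets ("within 5 km") rather than raw distances.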

Lack of User Control

Problem: Many apps provide limited options for users to control what data they share.

Solution: Decentralized identity enables granular consent management. Users can decide exactly what information is shared, with whom, and for how long, enhancing privacy and user agency.
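One way to picture granular consent management (a hypothetical sketch, not ONT ID's actual API): each grant names the attributes shared, the audience, and an expiry, and every attribute read is checked against the currently active grants.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ConsentGrant:
    """One user-issued grant: which attributes, to whom, until when."""
    attributes: frozenset
    audience: str
    expires_at: float

@dataclass
class ConsentLedger:
    grants: list = field(default_factory=list)

    def grant(self, attributes, audience, ttl_seconds, now=None):
        # Record a time-limited, audience-scoped consent decision.
        now = time.time() if now is None else now
        self.grants.append(ConsentGrant(frozenset(attributes), audience, now + ttl_seconds))

    def allowed(self, attribute, audience, now=None) -> bool:
        # An attribute is readable only under an unexpired, matching grant.
        now = time.time() if now is None else now
        return any(
            attribute in g.attributes and g.audience == audience and now < g.expires_at
            for g in self.grants
        )
```

Consent that is scoped and expiring by default flips the usual model: instead of a one-time blanket permission at signup, sharing lapses unless the user actively renews it.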

By adopting decentralized identity solutions like ONT ID, dating apps can significantly enhance user privacy and security. This approach not only addresses the specific vulnerabilities identified in the study but also aligns with data protection principles such as data minimization and user control.

As the digital dating landscape evolves, integrating decentralized identity could be the key to fostering safer, more authentic connections online. It’s time for dating apps to prioritize user-centric privacy measures, ensuring that the quest for love doesn’t come at the cost of personal security.

Securing Love in the Digital Age was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Policy Based Access Management

by Martin Kuppinger

Efficient, effective management of access controls from infrastructure to applications remains an aspiration for enterprises. The main drivers of this goal include the need for strengthening the cybersecurity posture, efficiency gains in managing access controls, the need for consistency in access controls across multiple solutions and layers, and regulatory compliance. Most organizations today struggle with a mixture of point solutions for managing access controls, many of these relying on static entitlements causing massive work and tending to become inaccurate. A consistent, policy-based solution for managing access controls ensures that the right people have the right access, at the right time, from the right place.

PingTalk

The Rise of Fraudulent Carriers: A Growing Threat to Freight Brokers

Truckstop is a trusted platform for brokers, shippers, and carriers. With Ping, Truckstop protects users from fraud while enhancing efficiency and reliability.

Strategic theft continues to threaten the supply chain, with reported loss values exceeding $34 million in Q2 of 2024 alone. Specifically, the rise of fraudulent carriers in the freight market is posing significant challenges for freight brokers, costing them lost revenue and damaged reputations.

 

Fraudulent carriers often use fake credentials and stolen identities to secure loads, only to disappear with the goods, leaving brokers to deal with the fallout. Carriers, for their part, risk having their identities stolen, which can result in personal and professional financial losses and reputational damage. This trend also erodes trust within the industry, making it harder for brokers to confidently engage with new carriers. For carriers, identity theft can limit their ability to secure the loads they need to keep their business moving.


Verida

Revamped Verida Network Explorer: Discover and Manage Your Digital Identity

Experience improved features for seamless navigation and discovery

We are excited to unveil the newly revamped Verida Network Explorer, your comprehensive gateway to exploring identity and data on the Verida Network. As a layer zero DePIN, Verida secures your private data and provides confidential compute for secure personal AI assistants. Our goal is to empower users and developers by providing an enhanced tool to gain a thorough understanding of decentralized identities (DID) and activities on the Verida Network.

Discovering the Verida Network Explorer

The Verida Network Explorer offers a variety of features that allow you to gain valuable insights into your digital identity. Here’s a closer look at what you can do with this tool:

1. Search for Your Identity

The foundation of your digital identity is your unique DID (Decentralized Identifier) address. The Network Explorer allows you to easily look up your identity using your DID within the Verida Network. Manage your identity with your private key and take control of your digital world.

Developers: Learn more about Accounts and Identity
Users: Create your DID with Verida Wallet

2. Examine Your Public DID Document and Metadata

Once you’ve located your Identity, the Network Explorer provides you with a view of your DID document, hosted on the decentralized Verida Network. Following the W3C standards, your DID document contains information describing the DID and its associated metadata, including associated application contexts.
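As a rough illustration of what such a DID document looks like under the W3C data model, here is a minimal sketch; the identifier, key material, and service endpoint are placeholders, not real Verida values:

```python
# A minimal DID document shaped after the W3C DID Core data model.
# All identifiers and key values below are illustrative placeholders.
did = "did:vda:testnet:0x1234abcd"

did_document = {
    "@context": ["https://www.w3.org/ns/did/v1"],
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "controller": did,
        "publicKeyHex": "02ab...placeholder",
    }],
    "authentication": [f"{did}#key-1"],
    # Application contexts can appear as service entries in the document.
    "service": [{
        "id": f"{did}#app-context",
        "type": "VeridaDatabase",
        "serviceEndpoint": "https://node.example/storage",
    }],
}

# Basic sanity checks a resolver might perform:
assert did_document["id"] == did
assert did_document["authentication"][0].startswith(did)
```

Everything the Explorer shows for a DID is derived from a public document of this shape, which is why no private key is needed just to view it.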

3. Storage Node Distribution

The Node Distribution section gives a geographical representation of the distribution of nodes across the globe. It helps identify where the nodes are located and ensures transparency in the storage and management of your data.

4. Storage Node Details

The List of Nodes section provides information about each node in the network, where you can see the node name, region, available slots, and status. You can also click on a node to open a dedicated page containing the node details. This information is crucial for developers and users to understand the network’s structure and performance.

Developers: Learn more about data storage on Verida Network

5. Overview of Storage Nodes on Verida Network (Coming Soon)

Coming soon is the Overview section, which provides a snapshot of the network’s storage capacity and utilization. You can see how much data is being used and how much capacity is available in the network. This helps in understanding the overall health and efficiency of the network.

Secure Your Data with Verida

Verida provides fast, low-cost infrastructure for private data and personal AI applications. As the first self-sovereign data network, Verida enables developers to build applications where users can manage their identity, crypto, data, and reputation.

You have the power to own, control, and delete every part of your digital footprint.

Unlocking the Power of Transparency

The Verida Network Explorer is your window to discover and manage your digital identity effectively. Whether you’re a user seeking insights into your data or a developer integrating with the Verida Network, this tool is your go-to resource.

To help you make the most of the Verida Network Explorer, we’ve prepared a comprehensive User Guide with step-by-step instructions and tips. For developers, there are technical docs for learning more about accounts and identity, application contexts, and data storage on the Verida Network.

Thank you for being a part of the Verida community as we shape the future of digital identity together. Stay tuned for more exciting updates!

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. Utilizing cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for various industries. We are also at the forefront of developing privacy-preserving personalized AI solutions. For more information, visit Verida.

Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

Revamped Verida Network Explorer: Discover and Manage Your Digital Identity was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 06. August 2024

KuppingerCole

Identity Governance and Administration


by Nitish Deshpande

This Leadership Compass on Identity Governance and Administration (IGA) provides an overview of the IGA market and a compass to help you find a solution that best meets your needs. It examines solutions that provide both identity lifecycle management and access governance capabilities. Solutions have been assessed based on defined core capabilities that can support organizations in activities such as provisioning, management of entitlements, configuration and enforcement of policies, access certifications, access reviews, and user self-service, among others. It also assesses the capabilities of these solutions to meet the needs of organizations to monitor, assess, and manage these risks.

BlueSky

Bluesky Welcomes Mike Masnick to Board of Directors

We’re thrilled to announce that Mike Masnick has joined Bluesky’s Board of Directors.

We’re thrilled to announce that Mike Masnick has joined Bluesky’s Board of Directors. He is the author of the “Protocols, Not Platforms” paper that first inspired the Bluesky initiative, and he is the founder and editor of Techdirt, among other accomplishments.

Mike has been an early supporter of Bluesky’s mission to create a global, open social network, as full of possibility as the early web. In the past, we’ve gone to Mike for inspiration and advice, and formalizing that relationship is the natural next step. His deep understanding of our approach — iterating towards widespread adoption while enabling trust & safety in a decentralized system — makes him an invaluable addition to our board.

As Bluesky’s network of more than 6 million users continues to grow, we’re excited to tap into Mike’s expertise as a reporter, editor and publisher. His familiarity with how policy, technology, and legal issues affect a company’s ability to innovate and grow is directly relevant to Bluesky, an open social network challenging incumbents who have kept innovation locked behind closed doors for the last decade.

“Mike's work has been an inspiration to us from the start,” says Jay Graber, CEO of Bluesky. “Having him join our board feels like a natural progression of our shared vision for a more open internet. His perspective will help ensure we're building something that truly serves users as we continue to evolve Bluesky and the AT Protocol.”

Mike shares his enthusiasm below:

“I’m excited to join the Bluesky board and to support its vision of building an open social network. Over the last few years, I’ve been thrilled to see how the Bluesky team has turned these ideas into reality, and I look forward to helping the company continue to build a better internet.”

Mike’s balanced perspective and strong advocacy for open networks will play a pivotal role in shaping the future of Bluesky and the AT Protocol. You can follow Mike Masnick on Bluesky here.

Monday, 05. August 2024

Spruce Systems

Who Should Build a Digital Wallet?

A guide for digital credential issuers deciding between an off-the-shelf digital wallet and custom wallet software.

Digital wallets manage digital credentials, assets, or authorizations. The most familiar digital wallet is probably Apple Wallet, which hundreds of millions of people use to store and use virtual credit cards and event tickets. For the growing number of states leading the shift towards digitizing identification documents, the most important role of digital wallets is storing and controlling users’ state-issued driver’s licenses and, soon, other state-issued identification, certifications, or licenses.

Digital wallets are primarily made for smartphones, where they interface with secure hardware and cryptographic software to ensure that credentials they store are secure and trusted. So it’s natural that the most widely-used wallet software is created by hardware and operating system creators (known as “original equipment manufacturers,” or OEMs) like Google and Apple, the driving forces behind most of the world’s smartphones. They know the hardware, and they have very smart teams.

However, default OEM digital wallets do have disadvantages. If you’re an enterprise or government hoping to give your users (or residents) the full benefit of the transition to digital identity, there are good reasons to build your own digital wallet software rather than relying on OEM wallets to have all the features you need. 

In brief, we believe there are two main reasons for an entity to build its own digital wallet. First, if your brand is highly trusted by end users, as might be the case for a state issuing digital driver’s licenses, building your own can dramatically impact adoption rates. The second major consideration is whether your in-house option would represent a big improvement in usability over the manufacturers’ default option, for instance, in applications requiring highly tailored features. 

The Off-The-Shelf Option

There are many benefits to using an existing OEM wallet. Most clearly, it requires fewer resources from your team, both for development and support. There is also already an enormous user base with Apple or Google Wallet installed.

Even more importantly, users of the OEM wallets will already be familiar with how to use these wallets and the nuances of the user experience. By the time they use their wallet to present the credentials your organization creates, they will have already used these wallets in their day-to-day lives for shopping or tickets. The “tap to pay” user experience that’s now widespread with phone-based payment apps is a very accessible “on-ramp” for using digital identity credentials, and when dealing with vital interactions involving official documents used by nearly the entire population, accessibility is paramount.

Big-name wallets also have privileged access to some of a mobile device’s hardware capabilities.  These can unlock additional, and sometimes important, functionality. That includes advanced security features, such as Near Field Communication (NFC), which can make verification more streamlined. NFC functionality allows for a verification interface to quickly pop up as a user holds their smartphone close to a verifier’s reader device, rather than requiring the user to open a separate application to initiate a verification interaction. This can make certain user interactions faster and more seamless for end users. Currently, this functionality is only supported for OEM wallet implementations or those who receive special permissions from the OEM providers to implement them in applications.

There are also convenience or security features that might only be possible for software created by device manufacturers themselves, such as letting users present a credential when a phone is locked or making certain credentials usable even when a device battery is nearly empty. For credentials that might be vital in unexpected or unusual circumstances, such as medical certifications, these features could trump other considerations.

The Advantages of an In-House Digital Wallet Design

In some cases, the advantages of manufacturer software may be outweighed by the greater flexibility, hands-on support, and tailoring to specific use cases made possible by wallet software designed specifically for your users.

Above all, creating your own wallet software is the best way to ensure you can give users exactly the features and experience they want, quickly, in an appealing, easy to use, and trusted package. Longer term, controlling your own software also increases your ability to build a relationship with your end users and get the most out of advances in digital identity, instead of being beholden to the product roadmaps of large technology companies.

The biggest tech companies, remember, serve an immense user base, and outside requests for changes or updates are handled by a comparatively small product management team. If you find your wallet needs a specific feature not already offered by an off-the-shelf wallet, you and your users could be waiting behind hundreds of other priorities for that update.

The ability of large tech companies to serve such a huge and diverse customer base relies on “App Stores” that offer independently-developed apps. The built-in assumption is that when a smaller group of users have highly specific needs, someone will build a tailored solution for them.

That matters because current OEM wallets are designed for a generic baseline user, and only have a fraction of the functionality that digital credential systems will make possible. Most notably, wallets from big tech companies currently only support a limited subset of the credentials that can be issued digitally.

There are also significant nuances to how a digital wallet communicates with credential providers, secures user information, and handles various identity formats, which have downstream impacts on security and user experience. In the case of government identity systems, that can impact the ability to link additional digital services to an identity credential, or to control processes like renewing digital credentials.

Wallets may also need different security standards depending on their application – a pass to a secure corporate facility is generally more sensitive than a concert ticket. So a highly secure corporate entity is likely to want to build its own wallet, with more rigorous onboarding. A consumer-focused app for concert tickets, by contrast, might want a less rigorous process that prioritizes ease of use. Some digital wallets may even want to incorporate “decentralized” identity signals like social media accounts for lower-security or community-based verifications.

Data policy is another reason to consider a home-grown wallet solution. The State of California collaborated with SpruceID to create its own digital wallet software, which, among other benefits, allowed them to create a user privacy policy different from the big tech companies’ standard agreement. In some cases this might be necessary to fully comply with local privacy regulations. It may also have benefits for user adoption: some users may be skeptical of the privacy practices of large conglomerates and more likely to trust a wallet created by an official body.

Many of these nuances must be implemented in the wallet software itself, whether specific user-facing interactions or back-end architecture. However, it’s difficult to push for any specific changes or features from a big tech company. Even if you’re representing the government of a sizable state, it's like trying to steer a massive aircraft carrier; it takes significant effort and time to change direction even if there is mutual interest.

Flexibility for a Dynamic Ecosystem

The direct advantages of building your own digital wallet are significant and can be expected to lead to better features, higher trust, more adoption, and more satisfied users than relying on OEM software. However, owning the development process grants another, potentially even more important advantage: helping ensure that your team and users can support the latest advancements in digital credential technologies across different industries, and not just the ones that make it to mass-market deployment via OEM wallets.

For example, state DMVs largely favor the “mobile driver’s license” family of digital credential standards (ISO/IEC 18013-5 mDL), and OEM wallets also privilege the mDL standard. But many educational institutions, for example, prefer the OpenBadges standard by the 1EdTech educational consortium, an alternative format built on the W3C’s Verifiable Credentials. Numerous other use cases are built using W3C Verifiable Credentials, such as Microsoft’s Entra Verified ID product, C2PA for content authenticity (a specification supported by Adobe, OpenAI, and Google), and GS1’s digital supply chain integrity efforts. Further, the EU Digital Identity efforts include SD-JWTs.
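To make the format differences concrete, here is a minimal, unsigned W3C Verifiable Credential payload sketched as a Python dictionary; the issuer and subject DIDs and the claim are placeholders:

```python
# A minimal W3C Verifiable Credential payload (unsigned, for illustration).
# Issuer, subject, and achievement values are hypothetical placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": "did:example:university",
    "issuanceDate": "2024-08-05T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:student",
        "achievement": "Intro to Cryptography",
    },
}

# An mDL (ISO/IEC 18013-5), by contrast, is CBOR-encoded and organized into
# namespaces rather than JSON-LD contexts, so a wallet supporting both must
# implement two quite different serializations and proof mechanisms.
```

Supporting several such formats side by side is exactly the kind of flexibility that is easier to guarantee in a wallet you build yourself.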

The downsides are the other side of the coin: owning the development of a digital wallet is neither the most economical nor the most convenient option for an organization. Today, it takes managers who understand emerging digital credential technologies, and vendors with specialized skill sets, to do it well. This is one reason we build open source software that lets any organization assemble a wallet from a strong base set of building blocks.

It’s still early in the development of these tools, and a plethora of solutions are emerging simultaneously. The ability to tailor your in-house wallet to a non-mDL standard is just one example of the flexibility that doing it yourself allows.

Different industries will prefer different ways to handle their exchanges of authentic data, and we believe that the market has progressed to the point where bottom-up development is more likely than “one format to rule them all.” Therefore, those looking to enable functionalities across different industries may need to consider building their own wallets to ensure support for their own use cases, especially if they are cross-vertical. For instance, a shipping authorization that refers to both a cargo truck (supply chain) and its driver (personal identity) could require tailored features. Providing strong support for end-to-end use cases may require integrating many different technologies, something custom software excels at but is less common for stock OEM capabilities.

Setting the Right Priorities

We believe that, ultimately, the decision to use an OEM wallet or to build (or enhance an existing app into) a new one should be based on a few specific factors.

First, program leaders should decide which option provides the most value to end users. So for instance, if a home-grown wallet has the potential to make the end-to-end experience ten times better than an OEM wallet's, that would be a major reason to build your own.

Second, ask which approach best meets expectations for usability, security, and privacy. That can include nuanced technical considerations but also the more basic question of branding. If your user base is more likely to trust a wallet carrying your brand, such as a state government, that might suggest an in-house wallet will drive better adoption.

The third big-picture consideration is whether your solution is sustainable in the long term. For instance, building your own wallet might be a mistake if you can’t guarantee an ongoing budget not only for development, but support and updates for, quite likely, many years to come. When vendors work on the same set of technology standards, there is less lock-in, more competitive pricing, and better parallelization without sacrificing overall interoperability.

At SpruceID, we help governments and enterprises navigate these complex considerations, and our products simplify this whole process. If you’d like to discuss a specific use case for digital wallets, and would like us to weigh in on using OEM or building your own, please schedule a chat.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Microsoft Entra (Azure AD) Blog

Microsoft Entra ID Governance licensing clarifications


In the past few weeks, we’ve announced the general availability of Microsoft Entra External ID and Microsoft Entra ID multi-tenant collaboration. We’ve received requests for more detail from some of you regarding licensing, so I’d like to provide additional clarity for both of these scenarios.

 

One person, one license

 

Included in the first announcement of more multi-tenant organization (MTO) features to enhance collaboration between users, we stated that only one Microsoft Entra ID P1 license is required per employee per multi-tenant organization. Expanding on that, the term “multi-tenant organization” has two meanings: an organization that owns and operates more than one tenant, and a set of features that enhance the collaboration experience for users across those tenants. However, your organization doesn’t have to deploy those capabilities to take advantage of the one person, one license philosophy. An organization that owns and operates multiple tenants only needs one Entra ID license per employee across those tenants. The same philosophy applies to Entra ID Governance: the organization only needs one license per person to govern the identities of these users across these tenants.

 

Note that this philosophy includes administrative accounts. In some organizations, administrators use standard user accounts for day-to-day tasks, and separate administrator accounts for privileged access. A person with a standard user account and an administrator account only needs one Entra ID Governance license for both identities to be governed. Of course, they could also leverage Entra ID Governance’s Privileged Identity Management (PIM) to temporarily elevate the access rights of a single account, instead of maintaining two accounts.

 

To illustrate this scenario, let’s consider an organization called Contoso, which owns ZT Tires and Tailspin Toys. Mallory is hired by Contoso, which uses Lifecycle Workflows in Entra ID Governance to onboard her user account and grant her access to the resources she needs for her job. Her account receives an access package with an entitlement to ZT Tires’ ERP app, and she requests access to Tailspin Toys’ inventory management app. Because Mallory has an Entra ID Governance license in the Contoso tenant, her identity can be governed in the ZT Tires and Tailspin Toys tenants with no additional governance licenses – one person, one license.

 

Diego is an identity administrator whose user account is in the ZT Tires tenant. He uses a separate administrator account for privileged access tasks in Contoso, Tailspin Toys, and ZT Tires tenants. Because Diego has an Entra ID Governance license in the ZT Tires tenant, both his user and administrator identities can be governed in all three tenants with no additional governance licenses – again, one person, one license.
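The counting rule in these two examples can be sketched in a few lines; the account inventory below is illustrative, not drawn from any Microsoft API:

```python
# Hypothetical account inventory: each person may hold accounts in several
# tenants (including separate admin accounts), but needs only one license.
accounts = [
    {"person": "Mallory", "tenant": "Contoso",       "privileged": False},
    {"person": "Mallory", "tenant": "ZT Tires",      "privileged": False},
    {"person": "Mallory", "tenant": "Tailspin Toys", "privileged": False},
    {"person": "Diego",   "tenant": "ZT Tires",      "privileged": False},
    {"person": "Diego",   "tenant": "Contoso",       "privileged": True},
    {"person": "Diego",   "tenant": "Tailspin Toys", "privileged": True},
]

# One license per unique person, regardless of tenant or account type.
licenses_needed = len({a["person"] for a in accounts})
print(licenses_needed)  # 2
```

Six accounts across three tenants still reduce to two licenses, which is the whole point of the one person, one license philosophy.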

 

Entra ID Governance in Microsoft Entra External ID

 

The other announcement covered Entra External ID, Microsoft’s solution for securing customer and business collaborator access to applications. In November, I blogged about the licensing model to govern the identities of business guests in the B2B scenario for Entra External ID and shared that pricing would be $0.75 per actively governed identity per month. Because metered, usage-based pricing to govern the identities of business guests is a different model from the existing license-based pricing model to govern the identities of employees, I’d like to share more detail.

 

A business guest identity in Entra External ID will accrue a single $0.75 charge in any month in which that identity is actively governed, no matter how many governance actions are taken on that identity. For example: 

 

A Contoso employee named Gerhart collaborates with Pradeep of Woodgrove Bank to produce Contoso’s quarterly financial statements. Contoso has deployed Entra External ID for its business partners such as Woodgrove Bank. In April, Pradeep accesses Contoso’s Microsoft Teams where Gerhart stores his quarterly reporting documents, but his Entra External ID identity has no governance actions taken on it, so it doesn’t accrue any charges.

 

In May, Pradeep receives an access package with an entitlement to Contoso’s accounting system, and Gerhart reviews Pradeep’s existing access to Contoso’s inventory management database, as well as to the Teams with the quarterly reporting documents. Because Pradeep’s identity in Entra External ID had identity governance actions taken on it, Contoso will accrue a $0.75 charge. Note that the charge is applied once, even though there were three identity governance actions taken during the month. Once that Entra External ID identity was governed in May, additional identity governance actions do not generate additional charges for that identity in May.
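The accrual rule can be illustrated with a short sketch; the action log and identity label below are hypothetical:

```python
from collections import defaultdict

RATE = 0.75  # USD per actively governed external identity per month

# Hypothetical governance-action log as (month, identity) pairs. Multiple
# actions on the same identity in the same month accrue only one charge;
# months with access but no governance actions accrue nothing.
actions = [
    ("2024-04", None),                 # April: access only, no governance
    ("2024-05", "pradeep@woodgrove"),  # access package assigned
    ("2024-05", "pradeep@woodgrove"),  # access review (accounting system)
    ("2024-05", "pradeep@woodgrove"),  # access review (Teams)
]

governed = defaultdict(set)
for month, identity in actions:
    if identity is not None:
        governed[month].add(identity)

# Charge once per distinct governed identity per month.
charges = {month: len(ids) * RATE for month, ids in governed.items()}
print(charges)  # {'2024-05': 0.75}
```

Three governance actions in May collapse to a single $0.75 charge, and April accrues nothing, matching the example above.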

 

To learn more about Microsoft Entra ID Governance licensing, visit the Licensing Fundamentals page.

 

 

Read more on this topic 

Entra ID multi-tenant collaboration | Microsoft Entra External ID general availability

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


Microsoft Entra Suite now generally available


Today we announced the general availability of Microsoft Entra Suite, the industry’s most comprehensive secure access solution for the workforce. The Microsoft Entra Suite delivers the most comprehensive Zero Trust user access solution and enables organizations to converge their access policy engine across identities, endpoints, and private and public networks.

 

What is Microsoft Entra Suite? 

The Microsoft Entra Suite delivers a complete cloud-based solution for workforce access. It brings together identity and network access that secures employee access to any cloud or on-premises application and resource from any location, consistently enforces least privilege access, and improves the employee experience.​  

 

This new offering advances our vision for the Microsoft Entra product line that can serve as a universal trust fabric for the era of AI, securely connecting any trustworthy identity with anything, from anywhere. In a recent blog post we also shared the four stages of creating such trust fabric for your organization, starting with foundational Zero Trust controls, and extending it to protecting access for your workforce, protecting access for your customers and partners, and protecting access in any cloud. The Microsoft Entra Suite delivers the complete toolset for the second stage of this journey – secure access for your workforce.  

 

The Microsoft Entra Suite includes the following products:  

 

 

 

 

- Microsoft Entra Private Access – an identity-centric Zero Trust Network Access solution that secures access to private apps and resources and reduces operational complexity and cost by replacing legacy VPNs.
- Microsoft Entra Internet Access – an identity-centric Secure Web Gateway (SWG) for SaaS apps and internet traffic that protects against malicious internet traffic, unsafe or non-compliant content, and other threats from the open internet.
- Microsoft Entra ID Governance – a complete identity governance and administration solution that automates the identity and access lifecycle to ensure that the right people have the right access to the right apps and services at the right time.
- Microsoft Entra ID Protection – an advanced identity solution that blocks identity compromise in real time using high-assurance authentication methods, automated risk and threat assessment, and adaptive access policies powered by advanced machine learning (also included in Microsoft Entra ID P2).
- Microsoft Entra Verified ID – a managed verifiable credentials service based on open standards that enables real-time identity verification in a secure and privacy-respecting way. Included in the Microsoft Entra Suite are premium Verified ID capabilities, starting with Face Check.

Microsoft Entra Suite enables you to:

- Unify Conditional Access policies for identities and networks.
- Ensure least privilege access for all users accessing all resources and apps.
- Improve the user experience for both in-office and remote workers.
- Reduce the complexity and cost of managing security tools from multiple vendors.

 

Check out the Microsoft Entra Suite introductory video below:

 

 

Unify Conditional Access policies for identities and networks 

You only have to manage one set of policies in one portal to configure access controls for both identities and networks. Conditional Access evaluates any access request, no matter where it’s coming from, performing real-time risk assessment to strengthen protection against unauthorized access.  

 

Ensure least privilege access for all users accessing all resources and apps 

You can automate the access lifecycle from the day a new employee joins your organization, through all their role changes, until the time of their exit. No matter how long or multifaceted an employee’s journey, Microsoft Entra ID Governance ensures that your employees have the right access to just the applications and resources they need, helping prevent an adversary’s lateral movement in case of a breach.  

 

Improve the user experience for both in-office and remote workers 

You can ensure that employees enjoy a faster and easier onboarding experience, faster and more secure sign-in via passwordless authentication, single sign-on for all applications, and superior performance. Using a self-service portal, your employees can request access to relevant packages, manage approvals and access reviews, and view request and approval history. Face Check with Microsoft Entra Verified ID enables real-time verification of your employee's identity, which streamlines remote onboarding and self-service recovery of passwordless accounts.  


Reduce the complexity and cost of managing security tools from multiple vendors 

Since traditional on-premises security solutions don’t scale to the needs of modern cloud-first, AI-first environments, organizations are seeking ways to secure and manage their assets from the cloud. With the Microsoft Entra Suite, you can retire multiple on-premises security tools, such as traditional Virtual Private Networks (VPNs), on-premises Secure Web Gateways (SWGs), and on-premises identity governance. 


Microsoft Entra Suite is currently priced at $12 per user per month. Microsoft Entra ID P1 is a licensing and technical prerequisite. Please refer to the Microsoft Entra Suite pricing page for more details. 


Join us for upcoming events! 

We encourage you to watch the Zero Trust spotlight on demand, where Microsoft experts and thought leaders dove deeper into these and other announcements, including the general availability of Microsoft Entra Internet Access and Microsoft Entra Private Access, which are part of the Microsoft Entra Suite.  


Additionally, register for the Tech Accelerator to join us on August 14, 2024, for a deep dive into the Microsoft Entra Suite and the Private Access and Internet Access products. 


Learn More 

The availability of the Microsoft Entra Suite marks a key milestone in our commitment to providing a more seamless and robust secure access experience that empowers the workforce anywhere and everywhere. Learn more from the official announcement.


Visit the Microsoft Entra Suite trial page to get started. 


Irina Nechaeva, General Manager, Identity and Network Access Product Marketing 


Read more on this topic 

Watch the Microsoft Entra Suite mechanics video
Microsoft Entra product page
Microsoft Entra portal


Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Dock

eIDAS 2.0: A Beginner's Guide


Professionals in identity companies often grapple with the complexities of evolving digital ID regulations. They must keep up with these changes to ensure compliance and leverage new opportunities.

That's where eIDAS 2.0—the latest update to the European Union's digital identity framework—comes in.

Full article: https://www.dock.io/post/eidas-2


KuppingerCole

Diving Deeper: Recent Insights From the KuppingerCole Analysts’ Cybersecurity Council Meeting


by Berthold Kerl

In the fast-changing landscape of cybersecurity, cooperation and sharing insights among professionals are essential for addressing challenges and influencing the future of digital safety. The KuppingerCole Analysts’ Cybersecurity Council, a notable group of more than 30 Chief Information Security Officers (CISOs) from various sectors, gathered for its second meeting of 2024 on June 5 at the European Identity & Cloud Conference (EIC). This gathering continued the discussions initiated by the council on February 28, 2024, covering several important topics.

Diving Deeper into Cybersecurity Frontiers

The council's meeting agenda was rich and varied, reflecting the breadth and depth of challenges that cybersecurity professionals face today. Key topics discussed included:

Defense against Mis/Disinformation: The World Economic Forum’s 2024 Global Risk Report names disinformation as the world’s top risk over the next two years. US Navy veteran Dr. Pablo Breuer and Daniella Taveau, former US representative at the World Trade Organization, provided insights into these risks and how organizations can mitigate them. The recommendations highlight the importance of developing a comprehensive, organization-wide response plan for information, and of putting proactive measures against disinformation in place before any incident occurs. It is crucial to educate executives about the risks associated with deepfakes, to guide users and clients on where to find credible information and how to identify and report misleading content, and to revise authentication processes for high-risk operations.

Harmonizing Regulatory Requirements: CISOs struggle with multi-regulatory requirements, which are sometimes unclear or even conflicting. KuppingerCole Analysts are working on a whitepaper that can serve as an open letter to authorities, as well as working on a tool to support multi-regulatory compliance, the KuppingerCole Compliance Navigator. Martin Kuppinger and Matthias Reinwarth jointly discussed this initiative. 

Passwordless for Consumers: Alejandro Leal, Senior Analyst at KuppingerCole Analysts, presented his latest Leadership Compass, which provides a comprehensive overview of the Passwordless Authentication for Consumers market. As demand for seamless and secure authentication experiences rises, the market for these solutions has grown significantly.

Cybersecurity Recommendations for 2024-2033: Annie Bailey, Research Director at KuppingerCole, presented the final workshop results for the Recommendations 2024-2033 report. Based on work with experts, it offers eight recommendations for CISOs preparing for 2033: CISOs should prioritize advocacy for resilience and recovery, maintain fundamental cyber hygiene, and understand the adversaries they face. Collaboration within the cybersecurity sector is essential to enhance transparency and security throughout supply chains. AI should be viewed not only as a potential risk but also as a valuable tool for mitigating those risks. A comprehensive approach to user-centric security is necessary, and identity security should be integral to the organization’s overall security framework. Additionally, CISOs must take a more proactive role in influencing both national and international regulations.

cyberevolution 2024: Berthold Kerl shared the preliminary event agenda, which covers 18 topics. The conference, set to take place from December 3 to December 5, 2024, aims to blend discussions of futuristic cybersecurity innovations with foundational cyber hygiene practices, maintaining a global perspective with a strong European focus.

Next Steps

The council's next meeting is scheduled for September 4, 2024, promising to further the dialogue on these critical topics and to foster deeper insights and strategies for navigating the complex cybersecurity landscape. The final meeting of 2024 will take place on December 4, 2024, onsite at the cyberevolution event in Frankfurt.

As the KuppingerCole Analysts’ Cybersecurity Council continues its vital work, the insights and outcomes from its meetings are a testament to the power of collaboration in advancing the field of cybersecurity. Through the shared expertise of its members, the council not only addresses the challenges of today but also shapes the cybersecurity frameworks of tomorrow.

Sunday, 04. August 2024

KuppingerCole

Lessons Learned from the CrowdStrike Incident


Matthias, Martin, John, Alexei, and Mike discuss the recent CrowdStrike incident and its impact on global players. They highlight the need for better software testing and validation processes to prevent such incidents. The conversation also touches on the importance of diversity in software solutions and the role of regulation in ensuring security. The analysts suggest measures such as phased rollout of updates, automated risk scoring, and improved backup and recovery processes. They emphasize the need for organizations to have resilience plans in place and to evaluate the tools and vendors they rely on.




Evernym

Understanding GDPR and Its Impact on Data Privacy Management


The General Data Protection Regulation (GDPR) represents one of the most significant updates to data privacy laws in recent history. Enforced by the European Union (EU) in May 2018, GDPR aims to protect the personal data of individuals within the EU and ...


Friday, 02. August 2024

KuppingerCole

Software Supply Chain Security: Are You Importing Problems?


by Alexei Balaganski

Software supply chain security (SSCS) is a really curious subject. On the one hand, nearly everyone has an intuitive understanding of what SSCS means and how critical it can be for the success of a modern digital business. After all, we have seen the consequences of multiple large-scale incidents recently, which have all been labelled “supply chain attacks” by the press.

A bit of history

Perhaps the first widely known event of this kind was the notorious SolarWinds hack in 2020, when a malicious actor managed to inject malware into a popular IT management tool that was then deployed to thousands of clients and used as an attack vector in multiple security breaches. In late 2023, we had the breach at Okta, a leading identity provider, that affected many of their enterprise customers, including several security vendors (who were, luckily, the first to raise the alarm). Finally, just a couple of weeks ago the entire world observed the catastrophic consequences of the botched software update by CrowdStrike, that literally grounded entire airlines and forced multiple banks and hospitals to halt their operations.

On the other hand, there still seems to be no common agreement on what exactly defines an issue as a supply chain attack and consequently, who should be responsible for the damage. Consider, for example, the recent case of attempted compromise of XZ Utils, when an unknown but possibly state-sponsored threat actor tried to infiltrate the open-source project and introduce a backdoor into a ubiquitous Linux utility.

Luckily, this attempt was not successful, but we do know how massive the potential consequences of an implanted backdoor could be – you need to look no further than Crypto AG, a Swiss cryptography provider that has posed as a front for a CIA operation for nearly 50 years. Multiple other vulnerabilities in popular open-source projects have been recognized as supply chain attacks as well: Heartbleed, Log4Shell, regreSSHion, etc. To be honest, the entire package management systems for popular languages like JavaScript or Python are currently such a mess that they can be considered huge attack vectors as well.

As a result, there seems to be a widespread opinion, not just among the public but among industry experts as well, that software supply chain security is a field of cybersecurity entirely focused on dealing with dangerous open-source libraries and is thus primarily the responsibility of software developers. While there is definitely a grain of truth in this sentiment, it quickly becomes irrelevant when we try to come up with practical recommendations for organizations affected by an ongoing incident or simply looking for measures to prevent a future one. Most of those organizations are not directly involved in software development and simply want to be more resilient against problems caused by their suppliers.

What is software supply chain security anyway?

Software supply chain security involves managing risks associated with software acquired from third-party sources. In today's interconnected world, every organization uses third-party software, including operating systems, commercial off-the-shelf software, custom applications produced by contractors or, in some cases, even programs developed in-house. Ensuring the integrity and security of all these software components is paramount yet challenging, especially considering the confusion about the responsibilities of the multiple parties involved.

Organizations face increasing regulatory pressures, including NIS2 and DORA, which mandate constant risk management and supply chain risk assessments. These regulations require organizations to understand their entire supply chain, including indirect dependencies. Typically, end users lack in-depth knowledge about software development. They seek compliance with regulations without delving into the technical intricacies and often this can lead to costly mistakes.

Perhaps the biggest misconception about SSCS is that it falls solely under the responsibility of an organization’s cybersecurity team. What the CrowdStrike incident clearly demonstrated is that having too much security can indeed be bad. Companies that were following security best practices – deploying agents on every machine, automating the deployment of patches, and so on – were in the end affected the most and had to deal with far more damage.

This reminds me again about the decade-long debate between the IT and OT security experts and people ridiculing the latter for placing process continuity and personal safety above the quick response to security breaches. Well, how the tables have turned… If the CrowdStrike incident is supposed to teach us anything, it should be that security is never the goal, but just a means for achieving better business resilience against catastrophic events and finding the right balance between security and availability should be the guiding principle for everyone.

So, how about calling it “Software Supply Chain Risk Management” instead?

The pragmatic approach

As analysts, we strive to offer practical advice to every organization. However, such advice would be substantially different for various organizations and stakeholders within them. For example, businesses with strong internal software development activities, such as CrowdStrike itself, obviously need to invest a lot into securing their entire software development lifecycle. The market nowadays offers numerous solutions ranging from universal application security testing platforms to highly specialized solutions, like the ones for managing secure artifact delivery or producing the software bill of materials.

Even more important is understanding that the traditional view of the software development lifecycle within a single organization simply no longer reflects the reality of our interconnected world. The life of a software product does not end at the moment it is delivered to a customer – in fact, it only just begins. And since it no longer remains in the hands of one party, the responsibility must be shared properly among several stakeholders. We have already figured this model out for cloud services – why not adopt something similar for every software product?

In a sense, software supply chain security as a strategy, just like Zero Trust, cannot be bought off the shelf. It requires a combination of careful planning, changes to business processes, improved communication with your suppliers and customers and, of course, substantial changes in regulation. We are already seeing the first laws introducing stronger punishment for organizations involved in critical infrastructure, with their management facing jail time for severe violations. Perhaps the very definition of “critical” must be revised to include operating systems, public cloud infrastructures, and cybersecurity platforms, considering the potential global impact of these tools on our society.

But how can end-user organizations influence these processes if they are not involved in developing the software they are using? My colleague Mike Small has already published his recommendations right after the CrowdStrike incident. To his practical advice I can only add another bit of philosophical musing: security is impossible without trust, but too much trust is even more dangerous than too little security.

Start applying the Zero Trust approach to every relationship with a supplier. This can be understood in various ways: from not taking any marketing claim at face value and always seeking a neutral third-party opinion, to very strict and formal measures like requiring a high Evaluation Assurance Level under the Common Criteria (ISO 15408) for each IT service or product you deploy. If you are looking for more information and practical advice, why not join us at the upcoming cyberevolution 2024 conference in Frankfurt this December? Software Supply Chain Security, cyber resilience, and NIS2 and DORA regulatory compliance will be major topics presented by industry experts.

Thursday, 01. August 2024

Spruce Systems

Sprucing Up Our Brand Identity

We have a new look, more aligned with our overall brand strategy. In this post, we'll talk more about our evolution and the creative process behind it.
Where We Began

SpruceID’s mission, since the founding of our company, is to let users control their identity and data across the web. From a very early (still accurate) SpruceID blog, “Our ultimate goal [is to] enable a future where everyone has access to a secure, private, and highly portable set of credentials and data they can take with them across the digital universe. In this future, these credentials will be inalienably yours, to use when necessary to gain access to a given area or activity.”

In the early days, we found our roots (yes, tree puns never get old) within the Web3 developer ecosystem, building a suite of open-source libraries to connect on-chain and off-chain identifiers and activity. Our early branding reflected this developer audience focus, featuring a dark mode design with futuristic graphic imagery and technical language that resonated with developers deeply embedded in the Web3 ecosystem. 

A version of our website from 2022.

We quickly learned that in addition to the tens of millions of people using cryptographic keys on Ethereum, there is another major audience actively using public-private keypairs that is already deeply entrenched in the business of issuing credentials to people – governments. 

In 2022, SpruceID won a contract with the California DMV to build out a mobile driver’s license solution and wallet application for Californians. This project underscored the importance of privacy-forward, standards-compliant verifiable digital credentials (VDCs) that can be seamlessly integrated into both public and private sector systems​. We were, and continue to be, excited and honored to collaborate with true visionaries at the California DMV who have worked tirelessly to champion the privacy and security of users in the pilot program. 

Since our initial foray into the public sector, we’ve found a strong foothold and have begun work on VDC implementation contracts with multiple state-level and national governments that are ideologically aligned with our values.

Today, we are excited to announce a significant rebrand that aligns with our expanded mission to serve not only Web3 enthusiasts but also governments and enterprises. Our updated look features lighter colors, more approachable and tangible design elements, and our messaging is crafted to be inclusive and easily understood by stakeholders at all levels.

The Evolution of Our Brand Identity

At the heart of every brand is a visual identity that resonates with its audience and communicates core values. We began our rebrand journey with the question: Who are we designing for? What do they care about? What motivates them? How do they like to learn?

Throughout this initiative, we relied heavily on research about our key audiences to ensure that all elements of our new brand (logo, colors, fonts, and tone) communicate the values most important to us and resonate with those with whom we build relationships. 

Read on to get an inside look at our creative process, led by our Sr. Designer, Scotty Matthewman.

Establishing our Brand Values

At the start of this project, we distilled our values into 5 core attributes: trustworthy, inventive, pioneering, conscientious, and secure. These values influenced every aspect of our rebranding effort, from the tone of our communication to the visual elements of our identity. 

We broke out our core values to brainstorm synonyms (below) that might spark visual reactions within our audience, and allow those experiencing our brand to feel heard and served by the solutions we offer. We ultimately aimed to create a brand that not only comes across as professional and reliable but also feels inclusive and innovative.

Bringing Our New Logo To Life

The evolution of our logo began with a focus on our core mission: empowering people to have greater control over their personal data. We explored many different iterations and directions, drawing inspiration from our design values while also trying to capture elements of identity, innovation, and security.

We liked the idea of many small elements, representing that people have many facets of their identity.

This direction felt representative of a few elements that resonated with us:

With a version we felt excited about, we decided to validate our hypothesis with real people within our key audiences. We asked a group of users to share their feedback and preferences across three different logo iterations (see survey results below). 

With positive user feedback on the logo direction, we shifted our focus to defining our brand colors, fonts, and imagery.

Color Palette: Balancing Trust and Innovation

Choosing the right color palette was important in setting the tone for our brand. We wanted a color that conveyed trust and safety, which is traditionally represented by blue. However, we also wanted to avoid the commonly used vibrant blue, so our solution was a muted blue with a slight lean towards purple, creating a balance that feels trustworthy, innovative and approachable.

The primary colors we landed on for our new brand are ‘Spruce’ blue, warm white, and black, complemented by warm neutrals with splashes of purple and green for vibrancy. This combination helps us stand out while maintaining a professional, inviting, and authentic appearance that resonates with our audience.

Font Choices: Merging Tradition with Modernity

In selecting our fonts, we aimed to balance modernity and readability. Initially, we experimented with sans-serif fonts, which are known for their clarity and accessibility. We wanted something that was neither overly emotive nor overly playful, but that would still allow us to be memorable and unique.

Our final choice includes Switzer for body text, known for its versatility and legibility, and Garamond for headers—a serif font that is also easily legible and adds a touch of traditional elegance. These design choices emphasize our commitment to humanizing technology.

Visual Assets: Clarity and Functionality

Our visual assets, including mockups, photos, and vector illustrations, play an important role in communicating complex technical concepts to those with varying levels of technical expertise. We prioritize real-world mockups over abstract representations, ensuring our visuals are clear and directly tied to our message. 

This approach aligns with our value of inclusivity, making sure our content is accessible and understandable to all audiences.

A New Chapter for the SpruceID Brand

Evolving our look and feel as a company has allowed us to align more closely with our core values, while making a very technical industry less abstract and more approachable. As we continue to grow and innovate, our brand will remain an important tool in our journey to empower users and drive the modernization of digital identity. This is just the beginning of a new and exciting chapter for us.

If you want to see our new brand identity in practice, check out our newly redesigned website, spruceid.com.


Dock

Dock Launches Privacy-Preserving Credential Monetization


Zug, Switzerland – August 1st, 2024 – Dock announced today the launch of its Privacy-Preserving Credential Monetization feature within its Decentralized ID platform. This cutting-edge innovation enables organizations to generate new revenue streams by charging for the verification of Digital ID credentials that they issue. 

With this advanced feature, Dock's platform sets a new industry standard, empowering organizations to launch an ID Ecosystem for their partners to securely share and monetize verifiable credentials. This accelerates onboarding processes, enhances transaction speeds, and improves business efficiencies. Importantly, user privacy is protected, as issuers and ecosystem administrators cannot identify which specific user or credential has been verified.

Traditionally, issuing ID credentials has been a cost burden for issuers who make the investment to ensure credentials contain high-quality information. However, with Dock's new feature, ID companies can transform this expense into a new revenue stream by charging for credential verifications. Credentials are part of an ecosystem where verifiers must pay a price for each verification, making it easier for issuers to generate revenue from credential issuance. This innovation enhances the economic viability for all stakeholders within a Digital ID Ecosystem.

Dock's technology uses Keyed Verification Anonymous Credential (KVAC) cryptography to ensure that credentials can only be verified by members of an ecosystem with a billing relationship.
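Dock's actual KVAC construction is not shown here, but the core keyed-verification property can be illustrated with a toy HMAC-based sketch: the issuer authenticates the credential under a key shared only within the ecosystem, so only ecosystem members (those with a billing relationship) can verify it. This is an illustrative assumption, not Dock's scheme; real KVACs use algebraic MACs and additionally provide unlinkability and selective disclosure, which this sketch does not.

```python
# Toy illustration of the keyed-verification idea behind KVAC-style
# credentials: verification requires the ecosystem's shared key. This is
# NOT Dock's actual construction, only a sketch of the access-control
# property that keyed verification gives.
import hmac, hashlib, json

ECOSYSTEM_KEY = b"shared-only-within-the-ecosystem"  # hypothetical key

def issue(claims: dict) -> dict:
    """Issuer binds the claims to the ecosystem key with a MAC."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ECOSYSTEM_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify(credential: dict, key: bytes) -> bool:
    """Only a holder of the ecosystem key can recompute and check the tag."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"])

cred = issue({"name": "Alice", "over_18": True})
print(verify(cred, ECOSYSTEM_KEY))    # ecosystem member: True
print(verify(cred, b"outsider-key"))  # outside the ecosystem: False
```

The design point is that verification is a gated capability: because checking a credential requires the ecosystem key, every verification necessarily happens inside the billing relationship, which is what makes per-verification charging enforceable.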

Privacy-Preserving Monetization

At Dock, privacy is our priority. Our Credential Monetization feature ensures that ecosystem administrators can track paid verifications but cannot identify which specific user or credential has been verified, preserving confidentiality and fostering a trust-rich environment. Users must give explicit consent for each verification, maintaining their control over their digital identity.

“In the past, a lack of ways in which organizations can generate revenue from verifiable credentials has significantly constrained adoption of this amazing technology. With the release of this new feature, clearly defined business models are now integrated into Dock’s issuance and verification platform enabling entities to roll out new products at scale,” said Nick Lambert, Dock’s CEO.

About Dock

Dock’s Decentralized Identity platform enables companies to turn verified ID data into trusted Reusable Digital ID Credentials, instantly verify their authenticity, and get paid when they are verified by third parties. It comprises an API, a web app, an ID wallet, and a dedicated blockchain. Dock has been a leader in decentralized digital identity technology since 2017 and is trusted by organizations in diverse sectors, including healthcare, finance, and education.


Holochain

hApps Spotlight: Relay

“We Needed This.”

There is a Holochain app on mobile phones. 

Shipping this fall, Volla Phone’s new Quintus model will come with two Holochain apps preloaded. One of these is Relay. On its face, Relay is a simple chat app, but its impact runs much deeper. Like Signal, Relay is fully encrypted. Unlike the industry standard for secure communication, however, Relay doesn’t use central servers, adding an additional layer of security and privacy. Relay also doesn’t need your phone number: it addresses messages directly to your public key, which acts as a decentralized digital identifier. Not only does Relay come preinstalled on the Quintus, it will also be available for Windows, macOS, Linux, and all Volla devices, including the Volla Tablet.
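Addressing messages to a public key instead of a phone number can be sketched as deriving a stable identifier directly from the key itself, so no central registry is needed to map users to numbers. The hash-based derivation below is an assumption for illustration, not Holochain's actual agent-key encoding.

```python
# Sketch of public-key addressing: each agent's identifier is a function of
# its own public key, so identifiers need no central issuer or registry.
# The SHA-256 truncation here is illustrative, not Holochain's encoding.
import hashlib

def address_for(public_key: bytes) -> str:
    """Derive a stable, shareable identifier from a public key."""
    return hashlib.sha256(public_key).hexdigest()[:16]

alice_pk = b"\x01" * 32  # stand-in for a real Ed25519 public key
bob_pk = b"\x02" * 32

assert address_for(alice_pk) == address_for(alice_pk)  # deterministic
assert address_for(alice_pk) != address_for(bob_pk)    # distinct per key
print("alice:", address_for(alice_pk))
```

Because the identifier is derived rather than assigned, anyone holding your public key can address you, and proving ownership of the identifier reduces to proving possession of the corresponding private key.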

But how did we get here? There have been calls for Holochain to be on mobile for a while, but it took a set of synergistic needs to make it happen. Volla needed an alternative to the big cloud providers. darksoil studio needed a mobile version of Holochain. And the world needs open source tech. 

No project, no technical development, springs up from thin air. It’s rather a process of many small steps and connections that come together to realize new possibilities.  

To tell the story of Relay, we reached out to the people involved to give a full view into this process.

Volla

It was just over a year ago that community dev Hedayat Abedijoo connected with Dr. Wurzer, bringing Nick Stebbings with him to the Volla Community Days where they demoed Holochain and made the first attempts at developing on mobile. Building on the fantastic early work of Nick and Hedayat, Holochain has been growing ties between Volla and our community since. Here is what Volla founder Dr. Wurzer has to say about their choice to integrate Holochain into their products:

“The big picture of Volla is a secure and independent communication infrastructure. A smartphone is an elementary component. The cloud is another important element. The only way to prevent external influence is distributed, highly encrypted edge computing. As soon as we process or manage user data, access could be forced or our service sanctioned.” —Dr. Wurzer, founder of Volla Phone

Volla’s respect for the privacy of their customers really sets them apart in the smartphone market. Following up on the above, we asked Dr. Wurzer what most excited him about the growing partnership with Holochain.

“The message of Volla is freedom through simplicity and security. Simplicity in the sense of convenience. And this convenience also includes the cloud. That's why Apple is so popular with the iCloud and why Google also has this offering. Together with Holochain, we can now tackle the mass market with high-performance hardware that can compete with an iPhone. If we manage to reach the mass market, we will give back privacy, security and self-determination to many consumers who are overwhelmed by technical issues and trends. We bring the power back to the people. It's not just protected communication, but also free access to information, which — I can only speak for Germany and Europe — is already restricted.” —Dr. Wurzer, founder of Volla Phone
Development

To develop Relay, Holochain brought in Aaron Brodeur and Tibet Sprague of Terran Collective along with our very own Eric Harris-Braun. Here is what they have to say about the development process:

“There was a steep learning curve at the beginning, not only deepening my understanding of Holochain development, but also working with a stack that is pretty new to me: SvelteKit, Tailwind, Skeleton, and of course the P2P Shipyard code that allows it to run on Android using Tauri Mobile. All in all, things have gone quite well for such a cutting-edge project. The biggest challenge in the Holochain universe is figuring out what bugs are coming from my code and what might be coming from Holochain itself or from the experimental Shipyard code, and keeping up with the many moving pieces. Not to mention also working with a fairly new and evolving platform in Volla Phone and Volla OS.” —Tibet Sprague of Terran Collective
“It's been a huge joy to work with Dr. Wurzer. He's a charming and brilliant entrepreneur with incredible attention to detail, but also a big vision of a phone free from entanglement with the big silicon valley services, allowing people to connect with one another safely and securely. The power of Holochain is obvious to him, and so it was consequently really easy to talk through and work around Holochain's unique affordances — the tradeoffs are worth it!” —Aaron Brodeur of Terran Collective

What was the moment Relay came to life for you?

“The moment we got it running on a Volla Phone for the first time, and Eric and I were successfully able to chat using it was incredible! P2P chat on big tech free phones is such a massively exciting accomplishment and we are going to pull it off.” —Tibet Sprague of Terran Collective
“I was on zoom with Eric Harris-Braun when he first showed me Relay working on the phone. There were many moments early on where we were not 100% sure what we wanted could work on the phone. None of this has ever been tried before! I know on the one hand it's just an app, and at that moment it was just a few lines of text on an empty screen... but I definitely felt like I was witnessing – and contributing to — a historic moment!” —Aaron Brodeur of Terran Collective

What contribution to the Relay app are you most proud of?

“I'm most proud of the brand. I wanted to make something that was adjacent to and compatible with Volla's brand, but distinct. It doesn't often work out like this, but it was the first name and logo I proposed. I love the name because it speaks to how the data is gossiped around within a group – each member of the group is relaying messages on behalf of the other members of the group. The icon is a network diagram in the shape of an R.” —Aaron Brodeur of Terran Collective
p2p Shipyard by darksoil studio

Of course, none of this would have been possible if the hurdles to Holochain working on mobile weren’t solved. The highest applause goes to the team at darksoil studio, who did the heavy lifting of building a Holochain plugin for Tauri so that Holochain apps can be deployed to all the platforms Tauri supports: Linux, Windows, macOS, Android, and soon iOS. The p2p Shipyard gives developers an easy way to bring their Holochain applications to a diversity of platforms. For the mobile context, Holochain is set to its “zero-arc” configuration, where mobile nodes don’t have to hold a portion of the DHT like a normal Holochain node would. This saves on battery life and helps the application meet app store requirements. (Volla is using a variation that actually does hold full nodes, thanks to their custom-designed OS, which enables tighter integration with Holochain.)

p2p Shipyard was the key development that made Relay and our work with Volla successful, but its potential extends across the whole ecosystem. So let’s dig a bit deeper into their journey:

Can you tell us about the experience of making Holochain mobile ready?

“It’s been a long road. We have been wanting Holochain on mobile for a long time, and ultimately, we needed it badly enough for ourselves that we went ahead and did it. The p2p Shipyard is our second tool to enable Holochain to work on mobile. Our first one, which used Firebase, we knew wouldn’t work long term, but we needed to test our app with users, so we went ahead and did it, and ultimately, the learnings from that process contributed to the p2p Shipyard. It’s been a challenging and empowering experience, and it’s not done yet!” —Eric Bear of darksoil studio

How do you see P2P Shipyard growing in the next year?

“Over the next year we hope to see the p2p Shipyard get put to use! We’re hoping to see a number of Holochain apps ship and function across platforms over this next year, and we’re already working with a few projects in the Holochain ecosystem to help them get their hApps into people’s hands (literally).” —Eric Bear of darksoil studio
Get Involved

To be one of the first people using Relay, you can support Volla through their Kickstarter campaign where they are fundraising for their initial production run of the Quintus. They hit their funding goal in the first 3 hours, but thankfully you can still get a phone from them. 

We don’t know when Relay will be more widely available in app stores, but we expect that to be in the works.

And as for p2p Shipyard, they have a wonderfully innovative funding model which we hope to see more of as it models open source ethics and sustainable business practices all in one. 

Support p2p Shipyard

darksoil wants open source development to be more sustainable, so with the p2p Shipyard, they’re using an experimental funding model called retroactive crowdfunding.

Basically, they went and built the software first, and now they are funding it after the fact. Once their retroactive crowdfunding goal of $100k is met, the p2p Shipyard will be free and open source forever.

Currently, the p2p Shipyard is source-available, so the code is publicly visible to audit, but a license is needed to use it. 

During the source-available phase, anyone interested in using the p2p Shipyard can reach out to them for a license; and all license fees will go towards the retroactive crowdfunding goal.

Once they meet the goal, and the p2p Shipyard is open source, they will continue to offer support services to maintain, improve, and adapt the p2p Shipyard to meet more people’s needs.

darksoil welcomes anyone invested in the Holochain ecosystem to support this infrastructure with a donation.

Wednesday, 31. July 2024

TBD on Dev.to

Simplifying Cross-Platform Payments with DAPs

"Dap me up!" is a colloquial term followed by a gesture used in Western cultures to greet people or express solidarity. At TBD, we're adding a new meaning to this phrase with Decentralized Agnostic Paytags (DAPs), an open source approach designed to simplify peer-to-peer payments across various applications.

Solving an Awkward Issue

Peer-to-peer (P2P) payment applications have existed since the late '90s, starting with tools like PayPal. With the rise of smartphones, innovative mobile apps like Venmo, Zelle, and Block's very own Cash App have made it easier to exchange funds directly from our phones.

However, a persistent issue remains: the sender and recipient must use the same app to complete a transaction. People have personal and valid reasons for choosing their preferred payment apps.

This situation creates an uncomfortable, unspoken battle when you need to pay a friend after dinner or a contractor for a service, only to discover that you use Cash App while they use Venmo. Now, you both face the dilemma of deciding who will download a new app, set up a new account, and link it to their bank account.

Instead, P2P payment apps can use DAPs—agnostic unique identifiers stored in a registry—to identify and route payments to the correct destination across different platforms. This allows you and the recipient to financially "dap each other up" regardless of which apps you prefer.

Introducing Decentralized Agnostic Paytags (DAPs)

A DAP is a user-friendly handle for payments, structured as @local-handle/domain.

Here's an example: I love the handle blackgirlbytes. If I registered that handle on Cash App's DAP registry, my DAP would be @blackgirlbytes/cash.app. Similarly, if I registered that handle on DIDPay's DAP registry, my handle would be @blackgirlbytes/didpay.me.

Each DAP links to a Decentralized Identifier (DID) to help identify who you are, regardless of the platform. While your DID includes cryptographic keys for identity protection, it also contains your money address—a unique identifier that directs different payment systems where to send your funds.
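To make the @local-handle/domain format concrete, here is a minimal parser. This is our own illustrative sketch; the function and type names are hypothetical and not part of the official DAP SDKs.

```typescript
// Hypothetical types and names for illustration; not from the official DAP SDKs.
interface ParsedDap {
  handle: string; // the local handle, e.g. "blackgirlbytes"
  domain: string; // the registry's domain, e.g. "cash.app"
}

// Split a DAP of the form "@local-handle/domain" into its two parts.
function parseDap(dap: string): ParsedDap {
  const match = dap.match(/^@([^/@\s]+)\/(\S+)$/);
  if (!match) {
    throw new Error(`Invalid DAP "${dap}": expected @local-handle/domain`);
  }
  return { handle: match[1], domain: match[2] };
}
```

Under this sketch, @blackgirlbytes/cash.app parses into the handle blackgirlbytes and the registry domain cash.app.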

Get Started with DAPs

The DAP ecosystem has two key actors: the payment platform that offers DAPs and the users who own the DAPs.

For Organizations: Any organization can enable users to create a DAP on their platform by setting up a DAP registry associated with their domain. This registry serves two main functions:

1. It allows users to sign up for DAPs.
2. It maps users' DAPs to their DID and money address.
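Those two registry functions can be sketched as a tiny in-memory mapping. This is a hypothetical illustration only (the class and field names are ours, not from the DAP specification); real DAP registries are served from the platform's domain and resolve records over the network.

```typescript
// Hypothetical sketch of a DAP registry's core mapping; names are ours,
// not from the official DAP specification or SDKs.
interface DapRecord {
  did: string;          // the user's Decentralized Identifier
  moneyAddress: string; // where payment systems should route funds
}

class DapRegistry {
  private records = new Map<string, DapRecord>();

  constructor(readonly domain: string) {}

  // Function 1: let a user sign up for a DAP under this registry's domain.
  register(handle: string, record: DapRecord): string {
    if (this.records.has(handle)) {
      throw new Error(`Handle "${handle}" is already taken`);
    }
    this.records.set(handle, record);
    return `@${handle}/${this.domain}`; // the user's full DAP
  }

  // Function 2: map a handle back to its DID and money address.
  resolve(handle: string): DapRecord | undefined {
    return this.records.get(handle);
  }
}
```

A sending app would resolve the recipient's handle through the registry at the DAP's domain, then route the payment to the returned money address.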

For Users: Once a DAP registry is available on your preferred platform, you can sign up for a DAP using your chosen handle.

If you're eager to experiment with DAPs but your preferred payment platform hasn't implemented a DAPs registry yet, you can obtain a DAP via our static DAP registry.

Keep Up to Date

DAPs debuted during a company-wide Hackathon at Block, where TBD, Cash App, and Square teams collaborated to bring this vision to life. As the DAP implementation continues to evolve, here are a few ways you can stay involved:

Join the TBD Discord
Read the DAP specification
Contribute to the open source SDKs: dap-js, dap-go, dap-kt, dap-dart
Create a DAP in our static DAP registry



liminal (was OWI)

Link Index for Account Takeover Prevention in Banking

The post Link Index for Account Takeover Prevention in Banking appeared first on Liminal.co.

Caribou Digital

Identity and Migration — research update #2: Digital ID in Kenya


(Authors: Keren Weitzberg, Nora Naji, and Emrys Schoemaker)

Source: Craft Silicon website

In this post, we share updates from our ongoing research project in Kenya, outlining emerging Digital Identity innovation around proxy verification and Sharia-compliant identification.

Agency banking and guarantors

As part of our research, we are exploring how migrants (including under- and undocumented people) access financial services. Irregular migrants, refugees, and asylum seekers (and even some regular migrants) often struggle to access the same kinds of banking and financial services as citizens. A common barrier is the lack of a foundational credential that global regulatory regimes, such as the Financial Action Task Force, require in order for service providers to enable access to things like SIM cards and bank accounts.

We are meeting people, however, who are trying to develop fintech for underserved populations and regions, and for people who lack these foundational credentials. One such organization is Craft Silicon, headquartered in Kenya. We spoke to their Head of Islamic Banking, who told us about a Sharia-compliant product, currently being rolled out in Yemen in partnership with a local bank, which will enable people to open up low-balance bank accounts in order to receive remittances. This product, which will soon include a mobile wallet, will be accessible to undocumented people, who will be able to onboard by registering with an agent using a guarantor. The guarantor will have to attest to the client’s identity and provide their biometrics during the registration process.

This is not the first product that uses a guarantor system, but it is arguably unique in its claim to be Sharia compliant. The use of guarantors can offer undocumented people, including migrants, basic banking services. However, such products also have limited utility (such as their low balances) due to the need to comply with a strict financial regulatory environment.

Providing fintech to the unbanked may not be a panacea for poverty (despite some of the more quixotic claims behind the financial inclusion narrative), but such services are nevertheless desperately needed. Often, these products are piloted successfully but then abandoned or never scaled. We hope to see that change in the future.

Identity and Migration — research update #2: Digital ID in Kenya was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Identity and Migration — Digital ID in Kenya

Identity and Migration — research update #1: Digital ID in Kenya Source: Julius Bitok, Kenya’s Principal Secretary, State Department for Immigration & Citizen Services

(Authors: Keren Weitzberg, Nora Naji, and Emrys Schoemaker)

In this post, we share updates from our ongoing research project in Kenya, outlining emerging Digital Identity initiatives and the implications of recent protests.

The Shirika Plan

Last year, the Kenyan government announced the “Shirika Plan” — an initiative promising to transform the country’s refugee camps, Dadaab and Kakuma, into integrated settlements, in line with the UN’s Global Compact on Refugees (Shirika means ‘coming together’ in Swahili). This plan seeks to better integrate Kenya’s approximately 600,000 refugees and asylum seekers into the economy and various national systems. On the surface, it represents a positive policy shift for a government that has often been hostile to refugees. In the past, the Kenyan state has threatened to close the Dadaab Refugee Complex, periodically suspended refugee registration, and even deported Somali refugees in contravention of international law. Yet, as refugee rights advocates like Victor Nyamori of Amnesty International (interviewed here for The New Humanitarian) have suggested, this new plan may be hype, aimed at courting international donors, rather than meaningful policy change.

Why does this matter for those who follow digital identity developments?

Integrating migrants and refugees into national registration systems can have far-reaching effects — from ‘normalizing’ often discriminated-against groups to problematizing legal status. As Immigration and Citizenship Services Principal Secretary Julius Bitok explained during a roundtable at this year’s ID4Africa AGM in Cape Town, the Shirika Plan will also entail the incorporation of refugees into Kenya’s newly launched (and controversial) digital identity project known as Maisha Namba (Life Number). According to Bitok, refugees will receive Maisha cards and Maisha Nambas (unique identity numbers) alongside Kenyan citizens and residents. But what additional rights and services, if any, this will afford them is less than clear.

Time will tell how meaningful such developments are for the hundreds of thousands of refugees and asylum seekers living in Kenya, many of whom have been in the country for well over two decades. We will be following this issue closely as part of our empirical research in the country.

Gen Z Protests and IDs

Another key development is the Gen Z protests in Kenya. In June 2024, after the Kenyan Parliament passed a new and controversial finance bill, Kenya erupted in nationwide anti-government protests driven primarily by Gen Z (those currently between the ages of 12 and 27). They protested against unemployment, economic inequality, and corruption, and were soon joined by larger segments of the population. Kenya’s government responded to the protests with brutal measures, including abductions and extrajudicial killings.

Despite the waning of protests over the last weeks, pressure on the Kenyan government remains high. In July, activists published an Action Plan to monitor the Ruto government’s efforts to reform key areas affecting young people, which was widely circulated on social media.

Notably, one of the action points in the document concerns the growing costs of identification. The Action Plan calls on the government to “ban the request of government issued documents for job seekers except for a national identification card; drop the replacement ID fee from KES. 1,000 fee to KES. 200; lower the drivers’ license renewal fee by 25% and make all licenses renewable every three years”.

In Kenya, where youth unemployment is at a record high, young job seekers are increasingly frustrated with needing to provide extraneous documents, such as certificates of good conduct from the police or tax compliance certificates from the Kenya Revenue Authority, each with a requisite fee. Such demands are especially problematic for certain ethnic and religious groups in Kenya, particularly Muslims, who have faced historic discrimination in access to IDs and legal documents. As this article explains, these various “certificates will cost a jobless Kenyan upwards of Sh5,000.” Add to that the new charges for national IDs. Last year, the government announced that ID cards, which were previously free, would cost new applicants 1,000 Kenyan shillings (roughly $6; £5), while the cost of replacing an ID would be increased 20-fold to 2,000 shillings, sparking widespread protests online. The price hikes were eventually blocked by the Kenya High Court. The government u-turned, lowering the cost for new IDs to KSh 300 and the fee for a replacement to KSh 1,000, still a marked rise over previous years.

The government may be trying to extract money from Kenya’s population and pay for its costly new digital identity project (Maisha Namba) by hiking fees. But amidst a cost-of-living crisis, these increased costs are not going down well with Kenya’s youth.

This controversy also reveals a key source of exclusion in Kenya and elsewhere: prohibitively high fees for mandatory government ID documents. As one Kenyan X user commented: “This fee undermines the constitutional right to identification, a fundamental necessity for all Kenyans.”

Identity and Migration — Digital ID in Kenya was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aergo

Hard Fork Timeline Update

We are announcing a rescheduling of our much-anticipated hard fork, initially set for July 2024. This decision highlights our dedication to providing a secure, compliant, and feature-rich platform. Our adjusted timeline now aims for August 2024. The new specifications and features of the hard fork will include the following:

Prioritizing Security and Compliance

To support various use cases like Security Token Offerings (STOs), we are integrating new compliance-related features into the mainnet. This includes functionalities such as contract whitelists and blacklists, designed to meet stringent regulatory requirements and enhance overall security.

Integrating Composable Transactions

One of the most exciting features we are adding is support for composable transactions. This enhancement, detailed in our Composable Transactions documentation, enables more intelligent blockchain use cases, including those leveraging machine learning (ML). Users will be able to call contracts using plain text, simplifying interactions. This transparency in on-chain smart contract usage and management will significantly streamline operations and improve usability.

Text-Based Smart Contract Deployment

We are also introducing a new feature for deploying smart contracts using plain text, as outlined in our GitHub pull request. By managing smart contract source code directly on-chain, we can better support intelligent blockchain use cases like those involving ML. This approach guarantees the generation of deterministic bytecode, thereby enhancing security. On-chain management of smart contract source code ensures transparency and reliability, which are crucial for advanced blockchain applications.

Developing On-Chain ML Models

Our team is developing ML models that can be used directly on our mainnet alongside smart contracts. Unlike large language models like ChatGPT, which require substantial resources and are not specifically designed for blockchain integration, we are benchmarking smaller models like Microsoft’s Phi-3. These models are optimized for on-chain use, ensuring they are efficient and suitable for enterprise and mainnet applications. This research aims to enable seamless integration of intelligent features within our blockchain environment. We will provide more details about these ML models soon, as they will play a crucial role in enhancing the Aergo platform.

Looking Ahead

While the delay may be disappointing, we will provide regular updates as we progress toward the new timeline. We remain focused on delivering a blockchain platform with the highest performance and security standards.

We appreciate your understanding and continued support as we work diligently to bring these significant enhancements to life. Stay tuned for more updates, and thank you for being a vital part of our community.

Hard Fork Timeline Update was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.